00:00:00.001 Started by upstream project "autotest-per-patch" build number 132364 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.062 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.063 The recommended git tool is: git 00:00:00.063 using credential 00000000-0000-0000-0000-000000000002 00:00:00.065 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.103 Fetching changes from the remote Git repository 00:00:00.105 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.147 Using shallow fetch with depth 1 00:00:00.147 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.147 > git --version # timeout=10 00:00:00.177 > git --version # 'git version 2.39.2' 00:00:00.177 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.201 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.201 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.016 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.029 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.044 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.044 > git config core.sparsecheckout # timeout=10 00:00:07.058 > git read-tree -mu HEAD # timeout=10 00:00:07.076 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.099 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.099 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.192 [Pipeline] Start of Pipeline 00:00:07.203 [Pipeline] library 00:00:07.204 Loading library shm_lib@master 00:00:07.204 Library shm_lib@master is cached. Copying from home. 00:00:07.222 [Pipeline] node 00:00:07.229 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.231 [Pipeline] { 00:00:07.239 [Pipeline] catchError 00:00:07.241 [Pipeline] { 00:00:07.251 [Pipeline] wrap 00:00:07.259 [Pipeline] { 00:00:07.267 [Pipeline] stage 00:00:07.269 [Pipeline] { (Prologue) 00:00:07.459 [Pipeline] sh 00:00:07.741 + logger -p user.info -t JENKINS-CI 00:00:07.762 [Pipeline] echo 00:00:07.764 Node: GP6 00:00:07.772 [Pipeline] sh 00:00:08.073 [Pipeline] setCustomBuildProperty 00:00:08.086 [Pipeline] echo 00:00:08.088 Cleanup processes 00:00:08.094 [Pipeline] sh 00:00:08.379 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.379 3535495 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.395 [Pipeline] sh 00:00:08.682 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.683 ++ grep -v 'sudo pgrep' 00:00:08.683 ++ awk '{print $1}' 00:00:08.683 + sudo kill -9 00:00:08.683 + true 00:00:08.700 [Pipeline] cleanWs 00:00:08.714 [WS-CLEANUP] Deleting project workspace... 00:00:08.714 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.722 [WS-CLEANUP] done 00:00:08.727 [Pipeline] setCustomBuildProperty 00:00:08.743 [Pipeline] sh 00:00:09.029 + sudo git config --global --replace-all safe.directory '*' 00:00:09.133 [Pipeline] httpRequest 00:00:09.523 [Pipeline] echo 00:00:09.525 Sorcerer 10.211.164.20 is alive 00:00:09.535 [Pipeline] retry 00:00:09.538 [Pipeline] { 00:00:09.557 [Pipeline] httpRequest 00:00:09.561 HttpMethod: GET 00:00:09.561 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.562 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.573 Response Code: HTTP/1.1 200 OK 00:00:09.574 Success: Status code 200 is in the accepted range: 200,404 00:00:09.574 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:17.051 [Pipeline] } 00:00:17.070 [Pipeline] // retry 00:00:17.077 [Pipeline] sh 00:00:17.359 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:17.376 [Pipeline] httpRequest 00:00:17.778 [Pipeline] echo 00:00:17.780 Sorcerer 10.211.164.20 is alive 00:00:17.790 [Pipeline] retry 00:00:17.793 [Pipeline] { 00:00:17.807 [Pipeline] httpRequest 00:00:17.812 HttpMethod: GET 00:00:17.813 URL: http://10.211.164.20/packages/spdk_f549a99538516b9ef68b9f999c3e563597d376e0.tar.gz 00:00:17.813 Sending request to url: http://10.211.164.20/packages/spdk_f549a99538516b9ef68b9f999c3e563597d376e0.tar.gz 00:00:17.820 Response Code: HTTP/1.1 200 OK 00:00:17.820 Success: Status code 200 is in the accepted range: 200,404 00:00:17.820 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_f549a99538516b9ef68b9f999c3e563597d376e0.tar.gz 00:02:27.118 [Pipeline] } 00:02:27.139 [Pipeline] // retry 00:02:27.147 [Pipeline] sh 00:02:27.432 + tar --no-same-owner -xf spdk_f549a99538516b9ef68b9f999c3e563597d376e0.tar.gz 00:02:29.977 [Pipeline] sh 00:02:30.263 + git -C spdk log --oneline -n5 00:02:30.263 f549a9953 vhost_blk: return VIRTIO_BLK_S_UNSUPP for flush command 00:02:30.263 c02c5e04b scripts/bash-completion: Speed up rpc lookup 00:02:30.263 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb 00:02:30.263 8d982eda9 dpdk: add adjustments for recent rte_power changes 00:02:30.263 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:02:30.275 [Pipeline] } 00:02:30.290 [Pipeline] // stage 00:02:30.299 [Pipeline] stage 00:02:30.301 [Pipeline] { (Prepare) 00:02:30.318 [Pipeline] writeFile 00:02:30.333 [Pipeline] sh 00:02:30.618 + logger -p user.info -t JENKINS-CI 00:02:30.631 [Pipeline] sh 00:02:30.915 + logger -p user.info -t JENKINS-CI 00:02:30.928 [Pipeline] sh 00:02:31.215 + cat autorun-spdk.conf 00:02:31.215 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:31.215 SPDK_TEST_NVMF=1 00:02:31.215 SPDK_TEST_NVME_CLI=1 00:02:31.215 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:31.215 SPDK_TEST_NVMF_NICS=e810 00:02:31.215 SPDK_TEST_VFIOUSER=1 00:02:31.215 SPDK_RUN_UBSAN=1 00:02:31.215 NET_TYPE=phy 00:02:31.223 RUN_NIGHTLY=0 00:02:31.228 [Pipeline] readFile 00:02:31.253 [Pipeline] withEnv 00:02:31.256 [Pipeline] { 00:02:31.270 [Pipeline] sh 00:02:31.557 + set -ex 00:02:31.557 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:31.557 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:31.557 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:31.557 ++ SPDK_TEST_NVMF=1 00:02:31.557 ++ SPDK_TEST_NVME_CLI=1 00:02:31.557 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:02:31.557 ++ SPDK_TEST_NVMF_NICS=e810 00:02:31.557 ++ SPDK_TEST_VFIOUSER=1 00:02:31.557 ++ SPDK_RUN_UBSAN=1 00:02:31.557 ++ NET_TYPE=phy 00:02:31.557 ++ RUN_NIGHTLY=0 00:02:31.557 + case $SPDK_TEST_NVMF_NICS in 00:02:31.557 + DRIVERS=ice 00:02:31.557 + [[ tcp == \r\d\m\a ]] 00:02:31.557 + [[ -n ice ]] 00:02:31.557 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:31.557 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:31.557 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:31.557 rmmod: ERROR: Module irdma is not currently loaded 00:02:31.557 rmmod: ERROR: Module i40iw is not currently loaded 00:02:31.557 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:31.557 + true 00:02:31.557 + for D in $DRIVERS 00:02:31.557 + sudo modprobe ice 00:02:31.557 + exit 0 00:02:31.567 [Pipeline] } 00:02:31.581 [Pipeline] // withEnv 00:02:31.587 [Pipeline] } 00:02:31.601 [Pipeline] // stage 00:02:31.611 [Pipeline] catchError 00:02:31.612 [Pipeline] { 00:02:31.626 [Pipeline] timeout 00:02:31.627 Timeout set to expire in 1 hr 0 min 00:02:31.629 [Pipeline] { 00:02:31.644 [Pipeline] stage 00:02:31.646 [Pipeline] { (Tests) 00:02:31.661 [Pipeline] sh 00:02:31.986 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:31.986 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:31.986 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:31.986 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:31.986 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:31.986 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:31.986 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:31.986 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:31.986 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:31.986 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:31.986 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:31.986 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:31.986 + source /etc/os-release 00:02:31.986 ++ NAME='Fedora Linux' 00:02:31.986 ++ VERSION='39 (Cloud Edition)' 00:02:31.986 ++ ID=fedora 00:02:31.986 ++ VERSION_ID=39 00:02:31.986 ++ VERSION_CODENAME= 00:02:31.986 ++ PLATFORM_ID=platform:f39 00:02:31.986 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:31.986 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:31.986 ++ LOGO=fedora-logo-icon 00:02:31.986 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:31.986 ++ HOME_URL=https://fedoraproject.org/ 00:02:31.986 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:31.986 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:31.986 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:31.986 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:31.986 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:31.986 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:31.986 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:31.986 ++ SUPPORT_END=2024-11-12 00:02:31.986 ++ VARIANT='Cloud Edition' 00:02:31.986 ++ VARIANT_ID=cloud 00:02:31.986 + uname -a 00:02:31.986 Linux spdk-gp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:31.986 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:32.922 Hugepages 00:02:32.922 node hugesize free / total 00:02:32.922 node0 1048576kB 0 / 0 00:02:32.922 node0 2048kB 0 / 0 00:02:32.922 node1 1048576kB 0 / 0 00:02:32.922 node1 2048kB 0 / 0 
00:02:32.922 00:02:32.922 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:32.922 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:02:32.922 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:02:32.922 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:02:32.922 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:02:32.922 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:02:32.922 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:02:32.922 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:02:32.922 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:02:32.922 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:32.922 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:02:33.182 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:02:33.182 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:02:33.182 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:02:33.182 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:02:33.182 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:02:33.182 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:02:33.182 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:02:33.182 + rm -f /tmp/spdk-ld-path 00:02:33.182 + source autorun-spdk.conf 00:02:33.182 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:33.182 ++ SPDK_TEST_NVMF=1 00:02:33.182 ++ SPDK_TEST_NVME_CLI=1 00:02:33.182 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:33.182 ++ SPDK_TEST_NVMF_NICS=e810 00:02:33.182 ++ SPDK_TEST_VFIOUSER=1 00:02:33.182 ++ SPDK_RUN_UBSAN=1 00:02:33.182 ++ NET_TYPE=phy 00:02:33.182 ++ RUN_NIGHTLY=0 00:02:33.182 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:33.182 + [[ -n '' ]] 00:02:33.182 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.182 + for M in /var/spdk/build-*-manifest.txt 00:02:33.182 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:33.182 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:33.182 + for M in /var/spdk/build-*-manifest.txt 00:02:33.182 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:33.182 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:33.182 + for M in /var/spdk/build-*-manifest.txt 00:02:33.182 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:33.182 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:33.182 ++ uname 00:02:33.182 + [[ Linux == \L\i\n\u\x ]] 00:02:33.182 + sudo dmesg -T 00:02:33.182 + sudo dmesg --clear 00:02:33.182 + dmesg_pid=3536817 00:02:33.182 + [[ Fedora Linux == FreeBSD ]] 00:02:33.182 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:33.182 + sudo dmesg -Tw 00:02:33.182 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:33.182 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:33.182 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:33.182 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:33.182 + [[ -x /usr/src/fio-static/fio ]] 00:02:33.182 + export FIO_BIN=/usr/src/fio-static/fio 00:02:33.182 + FIO_BIN=/usr/src/fio-static/fio 00:02:33.182 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:33.182 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:33.182 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:33.182 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:33.182 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:33.182 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:33.182 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:33.182 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:33.182 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:33.182 09:36:09 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:33.182 09:36:09 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:33.182 09:36:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:33.182 09:36:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:33.182 09:36:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:02:33.182 09:36:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:33.182 09:36:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:33.182 09:36:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:33.182 09:36:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:33.182 09:36:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:33.182 09:36:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:33.182 09:36:09 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:33.182 09:36:09 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:33.182 09:36:10 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:33.182 09:36:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:33.182 09:36:10 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:33.182 09:36:10 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:33.182 09:36:10 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:33.182 09:36:10 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:33.183 09:36:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.183 09:36:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.183 09:36:10 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.183 09:36:10 -- paths/export.sh@5 -- $ export PATH 00:02:33.183 09:36:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.183 09:36:10 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:33.183 09:36:10 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:33.183 09:36:10 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732091770.XXXXXX 00:02:33.183 09:36:10 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732091770.ZxyFSL 00:02:33.183 09:36:10 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:33.183 09:36:10 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:33.183 09:36:10 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:33.183 09:36:10 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:33.183 09:36:10 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:33.183 09:36:10 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:33.183 09:36:10 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:33.183 09:36:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:33.183 09:36:10 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:33.183 09:36:10 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:33.183 09:36:10 -- pm/common@17 -- $ local monitor 00:02:33.183 09:36:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.183 09:36:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.183 09:36:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.183 09:36:10 -- pm/common@21 -- $ date +%s 00:02:33.183 09:36:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.183 09:36:10 -- pm/common@21 -- $ date +%s 00:02:33.183 09:36:10 -- pm/common@25 -- $ sleep 1 00:02:33.183 09:36:10 -- pm/common@21 -- $ date +%s 00:02:33.183 09:36:10 -- pm/common@21 -- $ date +%s 00:02:33.183 09:36:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732091770 00:02:33.183 09:36:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732091770 00:02:33.183 09:36:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732091770 00:02:33.183 09:36:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732091770 00:02:33.183 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732091770_collect-cpu-load.pm.log 00:02:33.183 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732091770_collect-vmstat.pm.log 00:02:33.183 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732091770_collect-cpu-temp.pm.log 00:02:33.442 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732091770_collect-bmc-pm.bmc.pm.log 00:02:34.382 09:36:11 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:34.382 09:36:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:34.382 09:36:11 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:34.382 09:36:11 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.382 09:36:11 -- spdk/autobuild.sh@16 -- $ date -u 00:02:34.382 Wed Nov 20 08:36:11 AM UTC 2024 00:02:34.382 09:36:11 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:34.382 v25.01-pre-201-gf549a9953 00:02:34.382 09:36:11 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:34.382 09:36:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:34.382 09:36:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:34.382 09:36:11 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:34.382 09:36:11 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:34.382 09:36:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:34.382 ************************************ 00:02:34.382 START TEST ubsan 00:02:34.382 ************************************ 00:02:34.382 09:36:11 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:34.382 using ubsan 00:02:34.382 00:02:34.382 real 0m0.000s 00:02:34.382 user 0m0.000s 00:02:34.382 sys 0m0.000s 00:02:34.382 09:36:11 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:34.382 09:36:11 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:34.382 ************************************ 00:02:34.382 END TEST ubsan 00:02:34.382 ************************************ 00:02:34.382 09:36:11 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:34.382 09:36:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:34.382 09:36:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:34.382 09:36:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:34.382 09:36:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:34.382 09:36:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:34.382 09:36:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:34.382 09:36:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:34.382 
09:36:11 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:34.382 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:34.382 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:34.640 Using 'verbs' RDMA provider 00:02:45.567 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:55.563 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:55.564 Creating mk/config.mk...done. 00:02:55.564 Creating mk/cc.flags.mk...done. 00:02:55.564 Type 'make' to build. 00:02:55.564 09:36:32 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:02:55.564 09:36:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:55.564 09:36:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:55.564 09:36:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:55.564 ************************************ 00:02:55.564 START TEST make 00:02:55.564 ************************************ 00:02:55.564 09:36:32 make -- common/autotest_common.sh@1129 -- $ make -j48 00:02:55.829 make[1]: Nothing to be done for 'all'. 00:02:57.754 The Meson build system 00:02:57.754 Version: 1.5.0 00:02:57.754 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:57.754 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:57.754 Build type: native build 00:02:57.754 Project name: libvfio-user 00:02:57.754 Project version: 0.0.1 00:02:57.754 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:57.754 C linker for the host machine: cc ld.bfd 2.40-14 00:02:57.754 Host machine cpu family: x86_64 00:02:57.754 Host machine cpu: x86_64 00:02:57.754 Run-time dependency threads found: YES 00:02:57.754 Library dl found: YES 00:02:57.754 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:57.754 Run-time dependency json-c found: YES 0.17 00:02:57.754 Run-time dependency cmocka found: YES 1.1.7 00:02:57.754 Program pytest-3 found: NO 00:02:57.754 Program flake8 found: NO 00:02:57.754 Program misspell-fixer found: NO 00:02:57.754 Program restructuredtext-lint found: NO 00:02:57.754 Program valgrind found: YES (/usr/bin/valgrind) 00:02:57.754 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:57.754 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:57.754 Compiler for C supports arguments -Wwrite-strings: YES 00:02:57.754 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:57.754 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:57.754 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:57.754 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:57.754 Build targets in project: 8 00:02:57.754 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:57.754 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:57.754 00:02:57.754 libvfio-user 0.0.1 00:02:57.754 00:02:57.754 User defined options 00:02:57.754 buildtype : debug 00:02:57.754 default_library: shared 00:02:57.754 libdir : /usr/local/lib 00:02:57.754 00:02:57.754 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:58.700 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:58.700 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:58.700 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:58.700 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:58.700 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:58.700 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:58.700 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:58.700 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:58.700 [8/37] Compiling C object samples/null.p/null.c.o 00:02:58.700 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:58.700 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:58.700 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:58.700 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:58.700 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:58.700 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:58.700 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:58.700 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:58.700 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:58.965 [18/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:58.965 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:58.965 [20/37] Compiling C object samples/server.p/server.c.o 00:02:58.965 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:58.965 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:58.965 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:58.965 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:58.965 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:58.965 [26/37] Compiling C object samples/client.p/client.c.o 00:02:58.965 [27/37] Linking target samples/client 00:02:58.965 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:58.965 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:59.227 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:59.227 [31/37] Linking target test/unit_tests 00:02:59.227 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:59.227 [33/37] Linking target samples/gpio-pci-idio-16 00:02:59.227 [34/37] Linking target samples/null 00:02:59.227 [35/37] Linking target samples/lspci 00:02:59.227 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:59.227 [37/37] Linking target samples/server 00:02:59.227 INFO: autodetecting backend as ninja 00:02:59.227 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
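For anyone replaying this libvfio-user step outside of Jenkins: the build recorded above is an ordinary Meson out-of-tree build driven by SPDK's make, so a minimal hand-run sketch looks roughly like the lines below. The SPDK_DIR variable is an illustrative placeholder, and the exact meson setup invocation is an assumption reconstructed from the options the log reports (buildtype debug, default_library shared, libdir /usr/local/lib); only the ninja invocation and the DESTDIR meson install command in the next log entry appear verbatim in the log.

# Sketch only -- rebuild the bundled libvfio-user the way the autobuild log shows.
# SPDK_DIR is an illustrative placeholder for the workspace checkout path seen above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BUILD_DIR=$SPDK_DIR/build/libvfio-user/build-debug
# Configure step (options taken from the "User defined options" block above; exact flags are assumed):
meson setup "$BUILD_DIR" "$SPDK_DIR/libvfio-user" --buildtype debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
# Compile, then stage the install under the SPDK build tree as the following log entry does:
ninja -C "$BUILD_DIR"
DESTDIR=$SPDK_DIR/build/libvfio-user meson install --quiet -C "$BUILD_DIR"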
00:02:59.489 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:00.435 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:00.435 ninja: no work to do. 00:03:05.707 The Meson build system 00:03:05.707 Version: 1.5.0 00:03:05.707 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:03:05.707 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:03:05.707 Build type: native build 00:03:05.707 Program cat found: YES (/usr/bin/cat) 00:03:05.707 Project name: DPDK 00:03:05.707 Project version: 24.03.0 00:03:05.707 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:05.707 C linker for the host machine: cc ld.bfd 2.40-14 00:03:05.707 Host machine cpu family: x86_64 00:03:05.707 Host machine cpu: x86_64 00:03:05.707 Message: ## Building in Developer Mode ## 00:03:05.707 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:05.707 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:05.707 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:05.707 Program python3 found: YES (/usr/bin/python3) 00:03:05.707 Program cat found: YES (/usr/bin/cat) 00:03:05.707 Compiler for C supports arguments -march=native: YES 00:03:05.707 Checking for size of "void *" : 8 00:03:05.707 Checking for size of "void *" : 8 (cached) 00:03:05.707 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:05.707 Library m found: YES 00:03:05.707 Library numa found: YES 00:03:05.707 Has header "numaif.h" : YES 00:03:05.707 Library fdt found: NO 00:03:05.707 Library execinfo found: NO 00:03:05.707 Has header "execinfo.h" : YES 00:03:05.707 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:05.707 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:05.707 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:05.707 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:05.707 Run-time dependency openssl found: YES 3.1.1 00:03:05.707 Run-time dependency libpcap found: YES 1.10.4 00:03:05.707 Has header "pcap.h" with dependency libpcap: YES 00:03:05.708 Compiler for C supports arguments -Wcast-qual: YES 00:03:05.708 Compiler for C supports arguments -Wdeprecated: YES 00:03:05.708 Compiler for C supports arguments -Wformat: YES 00:03:05.708 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:05.708 Compiler for C supports arguments -Wformat-security: NO 00:03:05.708 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:05.708 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:05.708 Compiler for C supports arguments -Wnested-externs: YES 00:03:05.708 Compiler for C supports arguments -Wold-style-definition: YES 00:03:05.708 Compiler for C supports arguments -Wpointer-arith: YES 00:03:05.708 Compiler for C supports arguments -Wsign-compare: YES 00:03:05.708 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:05.708 Compiler for C supports arguments -Wundef: YES 00:03:05.708 Compiler for C supports arguments -Wwrite-strings: YES 00:03:05.708 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:05.708 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:03:05.708 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:05.708 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:05.708 Program objdump found: YES (/usr/bin/objdump) 00:03:05.708 Compiler for C supports arguments -mavx512f: YES 00:03:05.708 Checking if "AVX512 checking" compiles: YES 00:03:05.708 Fetching value of define "__SSE4_2__" : 1 00:03:05.708 Fetching value of define "__AES__" : 1 00:03:05.708 Fetching value of define "__AVX__" : 1 00:03:05.708 Fetching value of define "__AVX2__" : (undefined) 00:03:05.708 Fetching value of define "__AVX512BW__" : (undefined) 00:03:05.708 Fetching value of define "__AVX512CD__" : (undefined) 00:03:05.708 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:05.708 Fetching value of define "__AVX512F__" : (undefined) 00:03:05.708 Fetching value of define "__AVX512VL__" : (undefined) 00:03:05.708 Fetching value of define "__PCLMUL__" : 1 00:03:05.708 Fetching value of define "__RDRND__" : 1 00:03:05.708 Fetching value of define "__RDSEED__" : (undefined) 00:03:05.708 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:05.708 Fetching value of define "__znver1__" : (undefined) 00:03:05.708 Fetching value of define "__znver2__" : (undefined) 00:03:05.708 Fetching value of define "__znver3__" : (undefined) 00:03:05.708 Fetching value of define "__znver4__" : (undefined) 00:03:05.708 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:05.708 Message: lib/log: Defining dependency "log" 00:03:05.708 Message: lib/kvargs: Defining dependency "kvargs" 00:03:05.708 Message: lib/telemetry: Defining dependency "telemetry" 00:03:05.708 Checking for function "getentropy" : NO 00:03:05.708 Message: lib/eal: Defining dependency "eal" 00:03:05.708 Message: lib/ring: Defining dependency "ring" 00:03:05.708 Message: lib/rcu: Defining dependency "rcu" 00:03:05.708 Message: lib/mempool: Defining dependency "mempool" 00:03:05.708 Message: lib/mbuf: Defining dependency "mbuf" 00:03:05.708 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:05.708 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:05.708 Compiler for C supports arguments -mpclmul: YES 00:03:05.708 Compiler for C supports arguments -maes: YES 00:03:05.708 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:05.708 Compiler for C supports arguments -mavx512bw: YES 00:03:05.708 Compiler for C supports arguments -mavx512dq: YES 00:03:05.708 Compiler for C supports arguments -mavx512vl: YES 00:03:05.708 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:05.708 Compiler for C supports arguments -mavx2: YES 00:03:05.708 Compiler for C supports arguments -mavx: YES 00:03:05.708 Message: lib/net: Defining dependency "net" 00:03:05.708 Message: lib/meter: Defining dependency "meter" 00:03:05.708 Message: lib/ethdev: Defining dependency "ethdev" 00:03:05.708 Message: lib/pci: Defining dependency "pci" 00:03:05.708 Message: lib/cmdline: Defining dependency "cmdline" 00:03:05.708 Message: lib/hash: Defining dependency "hash" 00:03:05.708 Message: lib/timer: Defining dependency "timer" 00:03:05.708 Message: lib/compressdev: Defining dependency "compressdev" 00:03:05.708 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:05.708 Message: lib/dmadev: Defining dependency "dmadev" 00:03:05.708 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:05.708 Message: lib/power: Defining dependency "power" 00:03:05.708 Message: lib/reorder: Defining dependency 
"reorder" 00:03:05.708 Message: lib/security: Defining dependency "security" 00:03:05.708 Has header "linux/userfaultfd.h" : YES 00:03:05.708 Has header "linux/vduse.h" : YES 00:03:05.708 Message: lib/vhost: Defining dependency "vhost" 00:03:05.708 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:05.708 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:05.708 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:05.708 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:05.708 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:05.708 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:05.708 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:05.708 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:05.708 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:05.708 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:05.708 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:05.708 Configuring doxy-api-html.conf using configuration 00:03:05.708 Configuring doxy-api-man.conf using configuration 00:03:05.708 Program mandb found: YES (/usr/bin/mandb) 00:03:05.708 Program sphinx-build found: NO 00:03:05.708 Configuring rte_build_config.h using configuration 00:03:05.708 Message: 00:03:05.708 ================= 00:03:05.708 Applications Enabled 00:03:05.708 ================= 00:03:05.708 00:03:05.708 apps: 00:03:05.708 00:03:05.708 00:03:05.708 Message: 00:03:05.708 ================= 00:03:05.708 Libraries Enabled 00:03:05.708 ================= 00:03:05.708 00:03:05.708 libs: 00:03:05.708 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:05.708 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:05.708 cryptodev, dmadev, power, reorder, security, vhost, 00:03:05.708 00:03:05.708 Message: 00:03:05.708 =============== 00:03:05.708 Drivers Enabled 00:03:05.708 =============== 00:03:05.708 00:03:05.708 common: 00:03:05.708 00:03:05.708 bus: 00:03:05.708 pci, vdev, 00:03:05.708 mempool: 00:03:05.708 ring, 00:03:05.708 dma: 00:03:05.708 00:03:05.708 net: 00:03:05.708 00:03:05.708 crypto: 00:03:05.708 00:03:05.708 compress: 00:03:05.708 00:03:05.708 vdpa: 00:03:05.708 00:03:05.708 00:03:05.708 Message: 00:03:05.708 ================= 00:03:05.708 Content Skipped 00:03:05.708 ================= 00:03:05.708 00:03:05.708 apps: 00:03:05.708 dumpcap: explicitly disabled via build config 00:03:05.708 graph: explicitly disabled via build config 00:03:05.708 pdump: explicitly disabled via build config 00:03:05.708 proc-info: explicitly disabled via build config 00:03:05.708 test-acl: explicitly disabled via build config 00:03:05.708 test-bbdev: explicitly disabled via build config 00:03:05.708 test-cmdline: explicitly disabled via build config 00:03:05.708 test-compress-perf: explicitly disabled via build config 00:03:05.708 test-crypto-perf: explicitly disabled via build config 00:03:05.708 test-dma-perf: explicitly disabled via build config 00:03:05.708 test-eventdev: explicitly disabled via build config 00:03:05.708 test-fib: explicitly disabled via build config 00:03:05.708 test-flow-perf: explicitly disabled via build config 00:03:05.708 test-gpudev: explicitly disabled via build config 00:03:05.708 test-mldev: explicitly disabled via build config 00:03:05.708 test-pipeline: explicitly disabled via build config 00:03:05.708 test-pmd: explicitly 
disabled via build config 00:03:05.708 test-regex: explicitly disabled via build config 00:03:05.708 test-sad: explicitly disabled via build config 00:03:05.708 test-security-perf: explicitly disabled via build config 00:03:05.708 00:03:05.708 libs: 00:03:05.708 argparse: explicitly disabled via build config 00:03:05.708 metrics: explicitly disabled via build config 00:03:05.708 acl: explicitly disabled via build config 00:03:05.708 bbdev: explicitly disabled via build config 00:03:05.708 bitratestats: explicitly disabled via build config 00:03:05.708 bpf: explicitly disabled via build config 00:03:05.708 cfgfile: explicitly disabled via build config 00:03:05.708 distributor: explicitly disabled via build config 00:03:05.708 efd: explicitly disabled via build config 00:03:05.708 eventdev: explicitly disabled via build config 00:03:05.708 dispatcher: explicitly disabled via build config 00:03:05.708 gpudev: explicitly disabled via build config 00:03:05.708 gro: explicitly disabled via build config 00:03:05.708 gso: explicitly disabled via build config 00:03:05.708 ip_frag: explicitly disabled via build config 00:03:05.708 jobstats: explicitly disabled via build config 00:03:05.708 latencystats: explicitly disabled via build config 00:03:05.708 lpm: explicitly disabled via build config 00:03:05.708 member: explicitly disabled via build config 00:03:05.708 pcapng: explicitly disabled via build config 00:03:05.708 rawdev: explicitly disabled via build config 00:03:05.708 regexdev: explicitly disabled via build config 00:03:05.708 mldev: explicitly disabled via build config 00:03:05.708 rib: explicitly disabled via build config 00:03:05.708 sched: explicitly disabled via build config 00:03:05.708 stack: explicitly disabled via build config 00:03:05.708 ipsec: explicitly disabled via build config 00:03:05.708 pdcp: explicitly disabled via build config 00:03:05.708 fib: explicitly disabled via build config 00:03:05.708 port: explicitly disabled via build config 00:03:05.708 pdump: explicitly disabled via build config 00:03:05.708 table: explicitly disabled via build config 00:03:05.708 pipeline: explicitly disabled via build config 00:03:05.709 graph: explicitly disabled via build config 00:03:05.709 node: explicitly disabled via build config 00:03:05.709 00:03:05.709 drivers: 00:03:05.709 common/cpt: not in enabled drivers build config 00:03:05.709 common/dpaax: not in enabled drivers build config 00:03:05.709 common/iavf: not in enabled drivers build config 00:03:05.709 common/idpf: not in enabled drivers build config 00:03:05.709 common/ionic: not in enabled drivers build config 00:03:05.709 common/mvep: not in enabled drivers build config 00:03:05.709 common/octeontx: not in enabled drivers build config 00:03:05.709 bus/auxiliary: not in enabled drivers build config 00:03:05.709 bus/cdx: not in enabled drivers build config 00:03:05.709 bus/dpaa: not in enabled drivers build config 00:03:05.709 bus/fslmc: not in enabled drivers build config 00:03:05.709 bus/ifpga: not in enabled drivers build config 00:03:05.709 bus/platform: not in enabled drivers build config 00:03:05.709 bus/uacce: not in enabled drivers build config 00:03:05.709 bus/vmbus: not in enabled drivers build config 00:03:05.709 common/cnxk: not in enabled drivers build config 00:03:05.709 common/mlx5: not in enabled drivers build config 00:03:05.709 common/nfp: not in enabled drivers build config 00:03:05.709 common/nitrox: not in enabled drivers build config 00:03:05.709 common/qat: not in enabled drivers build config 
00:03:05.709 common/sfc_efx: not in enabled drivers build config 00:03:05.709 mempool/bucket: not in enabled drivers build config 00:03:05.709 mempool/cnxk: not in enabled drivers build config 00:03:05.709 mempool/dpaa: not in enabled drivers build config 00:03:05.709 mempool/dpaa2: not in enabled drivers build config 00:03:05.709 mempool/octeontx: not in enabled drivers build config 00:03:05.709 mempool/stack: not in enabled drivers build config 00:03:05.709 dma/cnxk: not in enabled drivers build config 00:03:05.709 dma/dpaa: not in enabled drivers build config 00:03:05.709 dma/dpaa2: not in enabled drivers build config 00:03:05.709 dma/hisilicon: not in enabled drivers build config 00:03:05.709 dma/idxd: not in enabled drivers build config 00:03:05.709 dma/ioat: not in enabled drivers build config 00:03:05.709 dma/skeleton: not in enabled drivers build config 00:03:05.709 net/af_packet: not in enabled drivers build config 00:03:05.709 net/af_xdp: not in enabled drivers build config 00:03:05.709 net/ark: not in enabled drivers build config 00:03:05.709 net/atlantic: not in enabled drivers build config 00:03:05.709 net/avp: not in enabled drivers build config 00:03:05.709 net/axgbe: not in enabled drivers build config 00:03:05.709 net/bnx2x: not in enabled drivers build config 00:03:05.709 net/bnxt: not in enabled drivers build config 00:03:05.709 net/bonding: not in enabled drivers build config 00:03:05.709 net/cnxk: not in enabled drivers build config 00:03:05.709 net/cpfl: not in enabled drivers build config 00:03:05.709 net/cxgbe: not in enabled drivers build config 00:03:05.709 net/dpaa: not in enabled drivers build config 00:03:05.709 net/dpaa2: not in enabled drivers build config 00:03:05.709 net/e1000: not in enabled drivers build config 00:03:05.709 net/ena: not in enabled drivers build config 00:03:05.709 net/enetc: not in enabled drivers build config 00:03:05.709 net/enetfec: not in enabled drivers build config 00:03:05.709 net/enic: not in enabled drivers build config 00:03:05.709 net/failsafe: not in enabled drivers build config 00:03:05.709 net/fm10k: not in enabled drivers build config 00:03:05.709 net/gve: not in enabled drivers build config 00:03:05.709 net/hinic: not in enabled drivers build config 00:03:05.709 net/hns3: not in enabled drivers build config 00:03:05.709 net/i40e: not in enabled drivers build config 00:03:05.709 net/iavf: not in enabled drivers build config 00:03:05.709 net/ice: not in enabled drivers build config 00:03:05.709 net/idpf: not in enabled drivers build config 00:03:05.709 net/igc: not in enabled drivers build config 00:03:05.709 net/ionic: not in enabled drivers build config 00:03:05.709 net/ipn3ke: not in enabled drivers build config 00:03:05.709 net/ixgbe: not in enabled drivers build config 00:03:05.709 net/mana: not in enabled drivers build config 00:03:05.709 net/memif: not in enabled drivers build config 00:03:05.709 net/mlx4: not in enabled drivers build config 00:03:05.709 net/mlx5: not in enabled drivers build config 00:03:05.709 net/mvneta: not in enabled drivers build config 00:03:05.709 net/mvpp2: not in enabled drivers build config 00:03:05.709 net/netvsc: not in enabled drivers build config 00:03:05.709 net/nfb: not in enabled drivers build config 00:03:05.709 net/nfp: not in enabled drivers build config 00:03:05.709 net/ngbe: not in enabled drivers build config 00:03:05.709 net/null: not in enabled drivers build config 00:03:05.709 net/octeontx: not in enabled drivers build config 00:03:05.709 net/octeon_ep: not in enabled 
drivers build config 00:03:05.709 net/pcap: not in enabled drivers build config 00:03:05.709 net/pfe: not in enabled drivers build config 00:03:05.709 net/qede: not in enabled drivers build config 00:03:05.709 net/ring: not in enabled drivers build config 00:03:05.709 net/sfc: not in enabled drivers build config 00:03:05.709 net/softnic: not in enabled drivers build config 00:03:05.709 net/tap: not in enabled drivers build config 00:03:05.709 net/thunderx: not in enabled drivers build config 00:03:05.709 net/txgbe: not in enabled drivers build config 00:03:05.709 net/vdev_netvsc: not in enabled drivers build config 00:03:05.709 net/vhost: not in enabled drivers build config 00:03:05.709 net/virtio: not in enabled drivers build config 00:03:05.709 net/vmxnet3: not in enabled drivers build config 00:03:05.709 raw/*: missing internal dependency, "rawdev" 00:03:05.709 crypto/armv8: not in enabled drivers build config 00:03:05.709 crypto/bcmfs: not in enabled drivers build config 00:03:05.709 crypto/caam_jr: not in enabled drivers build config 00:03:05.709 crypto/ccp: not in enabled drivers build config 00:03:05.709 crypto/cnxk: not in enabled drivers build config 00:03:05.709 crypto/dpaa_sec: not in enabled drivers build config 00:03:05.709 crypto/dpaa2_sec: not in enabled drivers build config 00:03:05.709 crypto/ipsec_mb: not in enabled drivers build config 00:03:05.709 crypto/mlx5: not in enabled drivers build config 00:03:05.709 crypto/mvsam: not in enabled drivers build config 00:03:05.709 crypto/nitrox: not in enabled drivers build config 00:03:05.709 crypto/null: not in enabled drivers build config 00:03:05.709 crypto/octeontx: not in enabled drivers build config 00:03:05.709 crypto/openssl: not in enabled drivers build config 00:03:05.709 crypto/scheduler: not in enabled drivers build config 00:03:05.709 crypto/uadk: not in enabled drivers build config 00:03:05.709 crypto/virtio: not in enabled drivers build config 00:03:05.709 compress/isal: not in enabled drivers build config 00:03:05.709 compress/mlx5: not in enabled drivers build config 00:03:05.709 compress/nitrox: not in enabled drivers build config 00:03:05.709 compress/octeontx: not in enabled drivers build config 00:03:05.709 compress/zlib: not in enabled drivers build config 00:03:05.709 regex/*: missing internal dependency, "regexdev" 00:03:05.709 ml/*: missing internal dependency, "mldev" 00:03:05.709 vdpa/ifc: not in enabled drivers build config 00:03:05.709 vdpa/mlx5: not in enabled drivers build config 00:03:05.709 vdpa/nfp: not in enabled drivers build config 00:03:05.709 vdpa/sfc: not in enabled drivers build config 00:03:05.709 event/*: missing internal dependency, "eventdev" 00:03:05.709 baseband/*: missing internal dependency, "bbdev" 00:03:05.709 gpu/*: missing internal dependency, "gpudev" 00:03:05.709 00:03:05.709 00:03:05.709 Build targets in project: 85 00:03:05.709 00:03:05.709 DPDK 24.03.0 00:03:05.709 00:03:05.709 User defined options 00:03:05.709 buildtype : debug 00:03:05.709 default_library : shared 00:03:05.709 libdir : lib 00:03:05.709 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:05.709 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:05.709 c_link_args : 00:03:05.709 cpu_instruction_set: native 00:03:05.709 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:03:05.709 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:03:05.709 enable_docs : false 00:03:05.709 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:05.709 enable_kmods : false 00:03:05.709 max_lcores : 128 00:03:05.709 tests : false 00:03:05.709 00:03:05.709 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:05.709 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:05.970 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:05.970 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:05.970 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:05.970 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:05.970 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:05.970 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:05.970 [7/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:05.970 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:05.970 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:05.970 [10/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:05.970 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:05.970 [12/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:05.970 [13/268] Linking static target lib/librte_kvargs.a 00:03:05.970 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:05.970 [15/268] Linking static target lib/librte_log.a 00:03:05.970 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:06.918 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:06.918 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:06.918 [19/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.918 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:06.918 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:06.918 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:06.918 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:06.918 [24/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:06.918 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:06.918 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:06.918 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:06.918 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:06.918 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:06.918 
[30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:06.918 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:06.918 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:06.918 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:06.918 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:06.918 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:06.918 [36/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:06.918 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:06.918 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:06.918 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:06.918 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:06.918 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:06.918 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:06.918 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:06.918 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:06.918 [45/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:06.918 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:06.918 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:06.918 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:06.918 [49/268] Linking static target lib/librte_telemetry.a 00:03:06.918 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:06.918 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:06.918 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:06.918 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:06.918 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:06.918 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:06.918 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:06.918 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:06.918 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:06.919 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:06.919 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:06.919 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:07.179 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:07.179 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:07.179 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:07.179 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:07.179 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:07.179 [67/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.441 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:07.441 [69/268] Linking static target lib/librte_pci.a 00:03:07.441 
[70/268] Linking target lib/librte_log.so.24.1 00:03:07.441 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:07.441 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:07.702 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:07.702 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:07.702 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:07.702 [76/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:07.702 [77/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:07.702 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:07.702 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:07.702 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:07.702 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:07.702 [82/268] Linking target lib/librte_kvargs.so.24.1 00:03:07.702 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:07.702 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:07.702 [85/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:07.702 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:07.702 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:07.702 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:07.702 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:07.702 [90/268] Linking static target lib/librte_ring.a 00:03:07.964 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:07.964 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:07.964 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:07.964 [94/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.964 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:07.964 [96/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:07.964 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:07.964 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:07.964 [99/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:07.964 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:07.964 [101/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.964 [102/268] Linking static target lib/librte_meter.a 00:03:07.964 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:07.964 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:07.964 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:07.964 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:07.964 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:07.964 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:07.964 [109/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:07.964 [110/268] Linking target lib/librte_telemetry.so.24.1 
00:03:07.964 [111/268] Linking static target lib/librte_eal.a 00:03:07.964 [112/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:07.964 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:07.964 [114/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:07.964 [115/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:07.964 [116/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:07.964 [117/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:07.964 [118/268] Linking static target lib/librte_rcu.a 00:03:07.964 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:07.964 [120/268] Linking static target lib/librte_mempool.a 00:03:07.964 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:07.964 [122/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:08.227 [123/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:08.227 [124/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:08.227 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:08.227 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:08.227 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:08.227 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:08.227 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:08.227 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:08.227 [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:08.227 [132/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:08.491 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:08.491 [134/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.491 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.491 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:08.491 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:08.491 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:08.491 [139/268] Linking static target lib/librte_net.a 00:03:08.753 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:08.753 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:08.753 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:08.753 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:08.753 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:08.753 [145/268] Linking static target lib/librte_cmdline.a 00:03:08.753 [146/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.753 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:08.753 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:08.753 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:09.015 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:09.015 [151/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:09.015 [152/268] Linking static target lib/librte_timer.a 00:03:09.015 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:09.015 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:09.015 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:09.015 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:09.015 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:09.015 [158/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.015 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:09.015 [160/268] Linking static target lib/librte_dmadev.a 00:03:09.015 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:09.015 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:09.273 [163/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:09.273 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:09.273 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:09.273 [166/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:09.273 [167/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.273 [168/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:09.273 [169/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:09.273 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:09.273 [171/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:09.273 [172/268] Linking static target lib/librte_power.a 00:03:09.273 [173/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:09.273 [174/268] Linking static target lib/librte_compressdev.a 00:03:09.273 [175/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.274 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:09.274 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:09.274 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:09.532 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:09.532 [180/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:09.532 [181/268] Linking static target lib/librte_reorder.a 00:03:09.532 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:09.532 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:09.532 [184/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:09.532 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:09.532 [186/268] Linking static target lib/librte_hash.a 00:03:09.532 [187/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.532 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:09.532 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:09.532 [190/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:09.532 
[191/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.790 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:09.790 [193/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:09.790 [194/268] Linking static target lib/librte_mbuf.a 00:03:09.790 [195/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:09.790 [196/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:09.790 [197/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.790 [198/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.790 [199/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:09.790 [200/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:09.790 [201/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:09.790 [202/268] Linking static target drivers/librte_bus_pci.a 00:03:09.790 [203/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:09.790 [204/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:09.790 [205/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.790 [206/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:09.790 [207/268] Linking static target lib/librte_security.a 00:03:10.048 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:10.048 [209/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:10.048 [210/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:10.048 [211/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:10.048 [212/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:10.048 [213/268] Linking static target drivers/librte_bus_vdev.a 00:03:10.048 [214/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.048 [215/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:10.048 [216/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:10.048 [217/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:10.048 [218/268] Linking static target drivers/librte_mempool_ring.a 00:03:10.048 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:10.048 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.048 [221/268] Linking static target lib/librte_ethdev.a 00:03:10.306 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.306 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.306 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.306 [225/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:10.306 [226/268] Linking static target lib/librte_cryptodev.a 00:03:11.680 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.613 [228/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:14.512 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.512 [230/268] Linking target lib/librte_eal.so.24.1 00:03:14.512 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.512 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:14.512 [233/268] Linking target lib/librte_pci.so.24.1 00:03:14.512 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:14.512 [235/268] Linking target lib/librte_meter.so.24.1 00:03:14.512 [236/268] Linking target lib/librte_ring.so.24.1 00:03:14.512 [237/268] Linking target lib/librte_timer.so.24.1 00:03:14.512 [238/268] Linking target lib/librte_dmadev.so.24.1 00:03:14.769 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:14.769 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:14.769 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:14.769 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:14.769 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:14.769 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:14.769 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:14.769 [246/268] Linking target lib/librte_mempool.so.24.1 00:03:15.027 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:15.027 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:15.027 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:15.027 [250/268] Linking target lib/librte_mbuf.so.24.1 00:03:15.027 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:15.027 [252/268] Linking target lib/librte_compressdev.so.24.1 00:03:15.027 [253/268] Linking target lib/librte_reorder.so.24.1 00:03:15.027 [254/268] Linking target lib/librte_net.so.24.1 00:03:15.027 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:03:15.284 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:15.285 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:15.285 [258/268] Linking target lib/librte_hash.so.24.1 00:03:15.285 [259/268] Linking target lib/librte_security.so.24.1 00:03:15.285 [260/268] Linking target lib/librte_cmdline.so.24.1 00:03:15.285 [261/268] Linking target lib/librte_ethdev.so.24.1 00:03:15.543 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:15.543 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:15.543 [264/268] Linking target lib/librte_power.so.24.1 00:03:18.824 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:18.824 [266/268] Linking static target lib/librte_vhost.a 00:03:19.389 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.389 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:19.389 INFO: autodetecting backend as ninja 00:03:19.389 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:03:41.313 CC lib/ut_mock/mock.o 00:03:41.313 CC lib/log/log.o 00:03:41.313 CC 
lib/log/log_flags.o 00:03:41.313 CC lib/log/log_deprecated.o 00:03:41.313 CC lib/ut/ut.o 00:03:41.313 LIB libspdk_ut_mock.a 00:03:41.313 LIB libspdk_ut.a 00:03:41.313 LIB libspdk_log.a 00:03:41.313 SO libspdk_ut_mock.so.6.0 00:03:41.313 SO libspdk_ut.so.2.0 00:03:41.313 SO libspdk_log.so.7.1 00:03:41.313 SYMLINK libspdk_ut_mock.so 00:03:41.313 SYMLINK libspdk_ut.so 00:03:41.313 SYMLINK libspdk_log.so 00:03:41.313 CC lib/ioat/ioat.o 00:03:41.313 CXX lib/trace_parser/trace.o 00:03:41.313 CC lib/util/base64.o 00:03:41.313 CC lib/util/bit_array.o 00:03:41.313 CC lib/dma/dma.o 00:03:41.313 CC lib/util/cpuset.o 00:03:41.313 CC lib/util/crc16.o 00:03:41.313 CC lib/util/crc32.o 00:03:41.313 CC lib/util/crc32c.o 00:03:41.313 CC lib/util/crc32_ieee.o 00:03:41.313 CC lib/util/crc64.o 00:03:41.313 CC lib/util/dif.o 00:03:41.313 CC lib/util/fd.o 00:03:41.313 CC lib/util/fd_group.o 00:03:41.313 CC lib/util/file.o 00:03:41.313 CC lib/util/hexlify.o 00:03:41.313 CC lib/util/iov.o 00:03:41.313 CC lib/util/math.o 00:03:41.313 CC lib/util/pipe.o 00:03:41.313 CC lib/util/net.o 00:03:41.313 CC lib/util/strerror_tls.o 00:03:41.313 CC lib/util/string.o 00:03:41.313 CC lib/util/uuid.o 00:03:41.313 CC lib/util/xor.o 00:03:41.313 CC lib/util/zipf.o 00:03:41.313 CC lib/util/md5.o 00:03:41.313 CC lib/vfio_user/host/vfio_user_pci.o 00:03:41.313 CC lib/vfio_user/host/vfio_user.o 00:03:41.313 LIB libspdk_dma.a 00:03:41.313 SO libspdk_dma.so.5.0 00:03:41.313 SYMLINK libspdk_dma.so 00:03:41.313 LIB libspdk_ioat.a 00:03:41.313 SO libspdk_ioat.so.7.0 00:03:41.313 LIB libspdk_vfio_user.a 00:03:41.313 SYMLINK libspdk_ioat.so 00:03:41.313 SO libspdk_vfio_user.so.5.0 00:03:41.313 SYMLINK libspdk_vfio_user.so 00:03:41.313 LIB libspdk_util.a 00:03:41.313 SO libspdk_util.so.10.1 00:03:41.313 SYMLINK libspdk_util.so 00:03:41.313 CC lib/rdma_utils/rdma_utils.o 00:03:41.313 CC lib/vmd/vmd.o 00:03:41.313 CC lib/vmd/led.o 00:03:41.313 CC lib/conf/conf.o 00:03:41.313 CC lib/json/json_parse.o 00:03:41.313 CC lib/env_dpdk/env.o 00:03:41.313 CC lib/idxd/idxd.o 00:03:41.313 CC lib/json/json_util.o 00:03:41.313 CC lib/env_dpdk/memory.o 00:03:41.313 CC lib/idxd/idxd_user.o 00:03:41.313 CC lib/env_dpdk/pci.o 00:03:41.313 CC lib/json/json_write.o 00:03:41.313 CC lib/idxd/idxd_kernel.o 00:03:41.313 CC lib/env_dpdk/init.o 00:03:41.313 CC lib/env_dpdk/threads.o 00:03:41.313 CC lib/env_dpdk/pci_ioat.o 00:03:41.313 CC lib/env_dpdk/pci_virtio.o 00:03:41.313 CC lib/env_dpdk/pci_vmd.o 00:03:41.313 CC lib/env_dpdk/pci_idxd.o 00:03:41.313 CC lib/env_dpdk/pci_event.o 00:03:41.313 CC lib/env_dpdk/sigbus_handler.o 00:03:41.313 CC lib/env_dpdk/pci_dpdk.o 00:03:41.313 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:41.313 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:41.313 LIB libspdk_trace_parser.a 00:03:41.313 SO libspdk_trace_parser.so.6.0 00:03:41.313 SYMLINK libspdk_trace_parser.so 00:03:41.313 LIB libspdk_conf.a 00:03:41.313 SO libspdk_conf.so.6.0 00:03:41.313 SYMLINK libspdk_conf.so 00:03:41.313 LIB libspdk_json.a 00:03:41.313 SO libspdk_json.so.6.0 00:03:41.313 LIB libspdk_rdma_utils.a 00:03:41.313 SYMLINK libspdk_json.so 00:03:41.313 SO libspdk_rdma_utils.so.1.0 00:03:41.313 SYMLINK libspdk_rdma_utils.so 00:03:41.313 CC lib/jsonrpc/jsonrpc_server.o 00:03:41.313 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:41.313 CC lib/jsonrpc/jsonrpc_client.o 00:03:41.313 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:41.313 CC lib/rdma_provider/common.o 00:03:41.313 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:41.313 LIB libspdk_idxd.a 00:03:41.313 SO libspdk_idxd.so.12.1 00:03:41.313 
LIB libspdk_vmd.a 00:03:41.313 SYMLINK libspdk_idxd.so 00:03:41.313 SO libspdk_vmd.so.6.0 00:03:41.313 SYMLINK libspdk_vmd.so 00:03:41.313 LIB libspdk_jsonrpc.a 00:03:41.313 LIB libspdk_rdma_provider.a 00:03:41.313 SO libspdk_jsonrpc.so.6.0 00:03:41.313 SO libspdk_rdma_provider.so.7.0 00:03:41.313 SYMLINK libspdk_jsonrpc.so 00:03:41.313 SYMLINK libspdk_rdma_provider.so 00:03:41.313 CC lib/rpc/rpc.o 00:03:41.313 LIB libspdk_rpc.a 00:03:41.313 SO libspdk_rpc.so.6.0 00:03:41.313 SYMLINK libspdk_rpc.so 00:03:41.572 CC lib/notify/notify.o 00:03:41.572 CC lib/keyring/keyring.o 00:03:41.572 CC lib/trace/trace.o 00:03:41.572 CC lib/keyring/keyring_rpc.o 00:03:41.572 CC lib/notify/notify_rpc.o 00:03:41.572 CC lib/trace/trace_flags.o 00:03:41.572 CC lib/trace/trace_rpc.o 00:03:41.830 LIB libspdk_notify.a 00:03:41.830 SO libspdk_notify.so.6.0 00:03:41.830 SYMLINK libspdk_notify.so 00:03:41.830 LIB libspdk_keyring.a 00:03:41.830 LIB libspdk_trace.a 00:03:41.830 SO libspdk_keyring.so.2.0 00:03:41.830 SO libspdk_trace.so.11.0 00:03:41.830 SYMLINK libspdk_keyring.so 00:03:41.830 SYMLINK libspdk_trace.so 00:03:42.088 LIB libspdk_env_dpdk.a 00:03:42.088 CC lib/sock/sock.o 00:03:42.088 CC lib/sock/sock_rpc.o 00:03:42.088 CC lib/thread/thread.o 00:03:42.088 CC lib/thread/iobuf.o 00:03:42.088 SO libspdk_env_dpdk.so.15.1 00:03:42.346 SYMLINK libspdk_env_dpdk.so 00:03:42.346 LIB libspdk_sock.a 00:03:42.640 SO libspdk_sock.so.10.0 00:03:42.640 SYMLINK libspdk_sock.so 00:03:42.640 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:42.640 CC lib/nvme/nvme_ctrlr.o 00:03:42.640 CC lib/nvme/nvme_fabric.o 00:03:42.640 CC lib/nvme/nvme_ns_cmd.o 00:03:42.640 CC lib/nvme/nvme_ns.o 00:03:42.640 CC lib/nvme/nvme_pcie_common.o 00:03:42.640 CC lib/nvme/nvme_pcie.o 00:03:42.640 CC lib/nvme/nvme_qpair.o 00:03:42.640 CC lib/nvme/nvme.o 00:03:42.640 CC lib/nvme/nvme_quirks.o 00:03:42.640 CC lib/nvme/nvme_transport.o 00:03:42.640 CC lib/nvme/nvme_discovery.o 00:03:42.640 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:42.640 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:42.640 CC lib/nvme/nvme_tcp.o 00:03:42.640 CC lib/nvme/nvme_opal.o 00:03:42.640 CC lib/nvme/nvme_io_msg.o 00:03:42.640 CC lib/nvme/nvme_poll_group.o 00:03:42.640 CC lib/nvme/nvme_zns.o 00:03:42.640 CC lib/nvme/nvme_stubs.o 00:03:42.640 CC lib/nvme/nvme_auth.o 00:03:42.640 CC lib/nvme/nvme_cuse.o 00:03:42.640 CC lib/nvme/nvme_vfio_user.o 00:03:42.640 CC lib/nvme/nvme_rdma.o 00:03:43.667 LIB libspdk_thread.a 00:03:43.667 SO libspdk_thread.so.11.0 00:03:43.667 SYMLINK libspdk_thread.so 00:03:43.924 CC lib/virtio/virtio.o 00:03:43.924 CC lib/init/json_config.o 00:03:43.924 CC lib/accel/accel.o 00:03:43.924 CC lib/vfu_tgt/tgt_endpoint.o 00:03:43.924 CC lib/virtio/virtio_vhost_user.o 00:03:43.924 CC lib/blob/blobstore.o 00:03:43.924 CC lib/fsdev/fsdev.o 00:03:43.924 CC lib/init/subsystem.o 00:03:43.924 CC lib/virtio/virtio_vfio_user.o 00:03:43.924 CC lib/vfu_tgt/tgt_rpc.o 00:03:43.924 CC lib/blob/request.o 00:03:43.924 CC lib/accel/accel_rpc.o 00:03:43.924 CC lib/virtio/virtio_pci.o 00:03:43.924 CC lib/fsdev/fsdev_io.o 00:03:43.924 CC lib/init/subsystem_rpc.o 00:03:43.924 CC lib/blob/zeroes.o 00:03:43.924 CC lib/fsdev/fsdev_rpc.o 00:03:43.924 CC lib/accel/accel_sw.o 00:03:43.924 CC lib/blob/blob_bs_dev.o 00:03:43.924 CC lib/init/rpc.o 00:03:44.182 LIB libspdk_init.a 00:03:44.182 SO libspdk_init.so.6.0 00:03:44.182 LIB libspdk_virtio.a 00:03:44.439 SO libspdk_virtio.so.7.0 00:03:44.439 SYMLINK libspdk_init.so 00:03:44.439 LIB libspdk_vfu_tgt.a 00:03:44.439 SO libspdk_vfu_tgt.so.3.0 00:03:44.439 
SYMLINK libspdk_virtio.so 00:03:44.439 SYMLINK libspdk_vfu_tgt.so 00:03:44.439 CC lib/event/app.o 00:03:44.439 CC lib/event/reactor.o 00:03:44.439 CC lib/event/log_rpc.o 00:03:44.439 CC lib/event/app_rpc.o 00:03:44.439 CC lib/event/scheduler_static.o 00:03:44.695 LIB libspdk_fsdev.a 00:03:44.695 SO libspdk_fsdev.so.2.0 00:03:44.695 SYMLINK libspdk_fsdev.so 00:03:44.953 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:44.953 LIB libspdk_event.a 00:03:44.953 SO libspdk_event.so.14.0 00:03:44.953 SYMLINK libspdk_event.so 00:03:45.210 LIB libspdk_accel.a 00:03:45.210 SO libspdk_accel.so.16.0 00:03:45.210 SYMLINK libspdk_accel.so 00:03:45.210 LIB libspdk_nvme.a 00:03:45.468 SO libspdk_nvme.so.15.0 00:03:45.468 CC lib/bdev/bdev.o 00:03:45.468 CC lib/bdev/bdev_rpc.o 00:03:45.468 CC lib/bdev/bdev_zone.o 00:03:45.468 CC lib/bdev/part.o 00:03:45.468 CC lib/bdev/scsi_nvme.o 00:03:45.468 LIB libspdk_fuse_dispatcher.a 00:03:45.468 SYMLINK libspdk_nvme.so 00:03:45.468 SO libspdk_fuse_dispatcher.so.1.0 00:03:45.727 SYMLINK libspdk_fuse_dispatcher.so 00:03:47.102 LIB libspdk_blob.a 00:03:47.102 SO libspdk_blob.so.11.0 00:03:47.102 SYMLINK libspdk_blob.so 00:03:47.360 CC lib/lvol/lvol.o 00:03:47.360 CC lib/blobfs/blobfs.o 00:03:47.360 CC lib/blobfs/tree.o 00:03:47.926 LIB libspdk_bdev.a 00:03:48.185 SO libspdk_bdev.so.17.0 00:03:48.185 SYMLINK libspdk_bdev.so 00:03:48.185 LIB libspdk_blobfs.a 00:03:48.185 SO libspdk_blobfs.so.10.0 00:03:48.185 SYMLINK libspdk_blobfs.so 00:03:48.185 LIB libspdk_lvol.a 00:03:48.450 CC lib/ublk/ublk.o 00:03:48.450 CC lib/nvmf/ctrlr.o 00:03:48.450 CC lib/ublk/ublk_rpc.o 00:03:48.450 CC lib/nbd/nbd.o 00:03:48.450 CC lib/scsi/dev.o 00:03:48.450 CC lib/nvmf/ctrlr_discovery.o 00:03:48.450 CC lib/nbd/nbd_rpc.o 00:03:48.450 CC lib/ftl/ftl_core.o 00:03:48.450 CC lib/nvmf/ctrlr_bdev.o 00:03:48.450 CC lib/scsi/lun.o 00:03:48.450 CC lib/ftl/ftl_init.o 00:03:48.450 CC lib/nvmf/subsystem.o 00:03:48.450 CC lib/scsi/port.o 00:03:48.450 CC lib/ftl/ftl_layout.o 00:03:48.450 CC lib/nvmf/nvmf.o 00:03:48.450 CC lib/scsi/scsi.o 00:03:48.450 CC lib/ftl/ftl_debug.o 00:03:48.450 CC lib/nvmf/nvmf_rpc.o 00:03:48.450 CC lib/scsi/scsi_bdev.o 00:03:48.450 CC lib/ftl/ftl_sb.o 00:03:48.450 CC lib/nvmf/transport.o 00:03:48.450 CC lib/scsi/scsi_pr.o 00:03:48.450 CC lib/ftl/ftl_io.o 00:03:48.450 CC lib/nvmf/tcp.o 00:03:48.450 CC lib/nvmf/stubs.o 00:03:48.450 CC lib/scsi/scsi_rpc.o 00:03:48.450 CC lib/ftl/ftl_l2p.o 00:03:48.450 CC lib/nvmf/mdns_server.o 00:03:48.450 CC lib/scsi/task.o 00:03:48.450 CC lib/ftl/ftl_l2p_flat.o 00:03:48.450 CC lib/ftl/ftl_nv_cache.o 00:03:48.450 CC lib/nvmf/vfio_user.o 00:03:48.450 CC lib/ftl/ftl_band.o 00:03:48.450 CC lib/nvmf/rdma.o 00:03:48.450 CC lib/ftl/ftl_band_ops.o 00:03:48.450 CC lib/ftl/ftl_writer.o 00:03:48.450 CC lib/nvmf/auth.o 00:03:48.450 CC lib/ftl/ftl_rq.o 00:03:48.450 CC lib/ftl/ftl_reloc.o 00:03:48.450 CC lib/ftl/ftl_l2p_cache.o 00:03:48.450 CC lib/ftl/ftl_p2l.o 00:03:48.450 CC lib/ftl/ftl_p2l_log.o 00:03:48.450 CC lib/ftl/mngt/ftl_mngt.o 00:03:48.450 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:48.450 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:48.450 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:48.450 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:48.450 SO libspdk_lvol.so.10.0 00:03:48.450 SYMLINK libspdk_lvol.so 00:03:48.450 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:48.711 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:48.711 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:48.711 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:48.711 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:48.711 CC lib/ftl/mngt/ftl_mngt_p2l.o 
00:03:48.711 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:48.711 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:48.711 CC lib/ftl/utils/ftl_conf.o 00:03:48.711 CC lib/ftl/utils/ftl_md.o 00:03:48.711 CC lib/ftl/utils/ftl_mempool.o 00:03:48.711 CC lib/ftl/utils/ftl_bitmap.o 00:03:48.711 CC lib/ftl/utils/ftl_property.o 00:03:48.711 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:48.974 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:48.974 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:48.974 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:48.974 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:48.974 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:48.974 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:48.974 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:48.974 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:48.974 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:48.974 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:48.974 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:48.974 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:48.974 CC lib/ftl/base/ftl_base_dev.o 00:03:48.974 CC lib/ftl/base/ftl_base_bdev.o 00:03:48.974 CC lib/ftl/ftl_trace.o 00:03:49.232 LIB libspdk_nbd.a 00:03:49.232 SO libspdk_nbd.so.7.0 00:03:49.232 LIB libspdk_scsi.a 00:03:49.232 SO libspdk_scsi.so.9.0 00:03:49.232 SYMLINK libspdk_nbd.so 00:03:49.490 SYMLINK libspdk_scsi.so 00:03:49.490 LIB libspdk_ublk.a 00:03:49.490 SO libspdk_ublk.so.3.0 00:03:49.490 CC lib/vhost/vhost.o 00:03:49.490 CC lib/iscsi/conn.o 00:03:49.490 CC lib/vhost/vhost_rpc.o 00:03:49.490 CC lib/iscsi/init_grp.o 00:03:49.490 CC lib/vhost/vhost_scsi.o 00:03:49.490 CC lib/iscsi/iscsi.o 00:03:49.490 CC lib/vhost/vhost_blk.o 00:03:49.490 CC lib/iscsi/param.o 00:03:49.490 CC lib/vhost/rte_vhost_user.o 00:03:49.490 CC lib/iscsi/portal_grp.o 00:03:49.490 CC lib/iscsi/tgt_node.o 00:03:49.490 CC lib/iscsi/iscsi_subsystem.o 00:03:49.490 CC lib/iscsi/iscsi_rpc.o 00:03:49.490 CC lib/iscsi/task.o 00:03:49.490 SYMLINK libspdk_ublk.so 00:03:49.748 LIB libspdk_ftl.a 00:03:50.007 SO libspdk_ftl.so.9.0 00:03:50.265 SYMLINK libspdk_ftl.so 00:03:50.833 LIB libspdk_vhost.a 00:03:50.833 SO libspdk_vhost.so.8.0 00:03:50.833 SYMLINK libspdk_vhost.so 00:03:51.091 LIB libspdk_nvmf.a 00:03:51.091 LIB libspdk_iscsi.a 00:03:51.091 SO libspdk_nvmf.so.20.0 00:03:51.091 SO libspdk_iscsi.so.8.0 00:03:51.091 SYMLINK libspdk_iscsi.so 00:03:51.348 SYMLINK libspdk_nvmf.so 00:03:51.607 CC module/env_dpdk/env_dpdk_rpc.o 00:03:51.607 CC module/vfu_device/vfu_virtio.o 00:03:51.607 CC module/vfu_device/vfu_virtio_blk.o 00:03:51.607 CC module/vfu_device/vfu_virtio_scsi.o 00:03:51.607 CC module/vfu_device/vfu_virtio_rpc.o 00:03:51.607 CC module/vfu_device/vfu_virtio_fs.o 00:03:51.607 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:51.607 CC module/sock/posix/posix.o 00:03:51.607 CC module/accel/dsa/accel_dsa.o 00:03:51.607 CC module/keyring/file/keyring.o 00:03:51.607 CC module/accel/dsa/accel_dsa_rpc.o 00:03:51.607 CC module/keyring/file/keyring_rpc.o 00:03:51.607 CC module/accel/ioat/accel_ioat.o 00:03:51.607 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:51.607 CC module/keyring/linux/keyring.o 00:03:51.607 CC module/accel/ioat/accel_ioat_rpc.o 00:03:51.607 CC module/keyring/linux/keyring_rpc.o 00:03:51.607 CC module/fsdev/aio/fsdev_aio.o 00:03:51.607 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:51.607 CC module/accel/error/accel_error.o 00:03:51.607 CC module/fsdev/aio/linux_aio_mgr.o 00:03:51.607 CC module/blob/bdev/blob_bdev.o 00:03:51.607 CC module/accel/error/accel_error_rpc.o 00:03:51.607 CC module/accel/iaa/accel_iaa.o 00:03:51.607 CC 
module/accel/iaa/accel_iaa_rpc.o 00:03:51.607 CC module/scheduler/gscheduler/gscheduler.o 00:03:51.607 LIB libspdk_env_dpdk_rpc.a 00:03:51.607 SO libspdk_env_dpdk_rpc.so.6.0 00:03:51.865 SYMLINK libspdk_env_dpdk_rpc.so 00:03:51.865 LIB libspdk_keyring_file.a 00:03:51.865 LIB libspdk_keyring_linux.a 00:03:51.865 LIB libspdk_scheduler_gscheduler.a 00:03:51.865 LIB libspdk_scheduler_dpdk_governor.a 00:03:51.865 SO libspdk_keyring_file.so.2.0 00:03:51.865 SO libspdk_keyring_linux.so.1.0 00:03:51.865 SO libspdk_scheduler_gscheduler.so.4.0 00:03:51.865 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:51.865 LIB libspdk_accel_ioat.a 00:03:51.865 LIB libspdk_accel_iaa.a 00:03:51.865 SYMLINK libspdk_keyring_file.so 00:03:51.865 SYMLINK libspdk_keyring_linux.so 00:03:51.865 LIB libspdk_accel_error.a 00:03:51.865 SO libspdk_accel_ioat.so.6.0 00:03:51.865 SYMLINK libspdk_scheduler_gscheduler.so 00:03:51.865 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:51.866 SO libspdk_accel_iaa.so.3.0 00:03:51.866 SO libspdk_accel_error.so.2.0 00:03:51.866 SYMLINK libspdk_accel_ioat.so 00:03:51.866 LIB libspdk_scheduler_dynamic.a 00:03:51.866 SYMLINK libspdk_accel_iaa.so 00:03:51.866 LIB libspdk_accel_dsa.a 00:03:51.866 SYMLINK libspdk_accel_error.so 00:03:51.866 SO libspdk_scheduler_dynamic.so.4.0 00:03:51.866 SO libspdk_accel_dsa.so.5.0 00:03:51.866 SYMLINK libspdk_scheduler_dynamic.so 00:03:52.125 SYMLINK libspdk_accel_dsa.so 00:03:52.125 LIB libspdk_blob_bdev.a 00:03:52.125 SO libspdk_blob_bdev.so.11.0 00:03:52.125 SYMLINK libspdk_blob_bdev.so 00:03:52.125 LIB libspdk_vfu_device.a 00:03:52.125 SO libspdk_vfu_device.so.3.0 00:03:52.385 SYMLINK libspdk_vfu_device.so 00:03:52.385 CC module/bdev/delay/vbdev_delay.o 00:03:52.385 CC module/bdev/passthru/vbdev_passthru.o 00:03:52.385 CC module/bdev/split/vbdev_split.o 00:03:52.385 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:52.385 CC module/blobfs/bdev/blobfs_bdev.o 00:03:52.385 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:52.385 CC module/bdev/nvme/bdev_nvme.o 00:03:52.385 CC module/bdev/error/vbdev_error.o 00:03:52.385 CC module/bdev/split/vbdev_split_rpc.o 00:03:52.385 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:52.385 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:52.385 CC module/bdev/malloc/bdev_malloc.o 00:03:52.385 CC module/bdev/nvme/nvme_rpc.o 00:03:52.385 CC module/bdev/gpt/gpt.o 00:03:52.385 CC module/bdev/nvme/bdev_mdns_client.o 00:03:52.385 CC module/bdev/error/vbdev_error_rpc.o 00:03:52.385 CC module/bdev/ftl/bdev_ftl.o 00:03:52.385 CC module/bdev/gpt/vbdev_gpt.o 00:03:52.385 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:52.385 CC module/bdev/null/bdev_null.o 00:03:52.385 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:52.385 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:52.385 CC module/bdev/aio/bdev_aio.o 00:03:52.385 CC module/bdev/nvme/vbdev_opal.o 00:03:52.385 CC module/bdev/lvol/vbdev_lvol.o 00:03:52.385 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:52.385 CC module/bdev/null/bdev_null_rpc.o 00:03:52.385 CC module/bdev/aio/bdev_aio_rpc.o 00:03:52.385 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:52.385 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:52.385 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:52.385 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:52.385 CC module/bdev/raid/bdev_raid.o 00:03:52.385 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:52.385 CC module/bdev/raid/bdev_raid_rpc.o 00:03:52.385 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:52.385 CC module/bdev/raid/bdev_raid_sb.o 00:03:52.385 CC module/bdev/raid/raid0.o 
00:03:52.385 CC module/bdev/raid/concat.o 00:03:52.385 CC module/bdev/raid/raid1.o 00:03:52.385 CC module/bdev/iscsi/bdev_iscsi.o 00:03:52.385 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:52.385 LIB libspdk_sock_posix.a 00:03:52.385 LIB libspdk_fsdev_aio.a 00:03:52.385 SO libspdk_sock_posix.so.6.0 00:03:52.643 SO libspdk_fsdev_aio.so.1.0 00:03:52.643 SYMLINK libspdk_sock_posix.so 00:03:52.643 SYMLINK libspdk_fsdev_aio.so 00:03:52.643 LIB libspdk_blobfs_bdev.a 00:03:52.643 SO libspdk_blobfs_bdev.so.6.0 00:03:52.901 LIB libspdk_bdev_error.a 00:03:52.901 LIB libspdk_bdev_split.a 00:03:52.901 SYMLINK libspdk_blobfs_bdev.so 00:03:52.901 LIB libspdk_bdev_iscsi.a 00:03:52.901 SO libspdk_bdev_error.so.6.0 00:03:52.901 SO libspdk_bdev_split.so.6.0 00:03:52.901 SO libspdk_bdev_iscsi.so.6.0 00:03:52.901 LIB libspdk_bdev_ftl.a 00:03:52.901 LIB libspdk_bdev_null.a 00:03:52.901 LIB libspdk_bdev_gpt.a 00:03:52.901 SYMLINK libspdk_bdev_error.so 00:03:52.901 SO libspdk_bdev_null.so.6.0 00:03:52.901 SO libspdk_bdev_ftl.so.6.0 00:03:52.901 SO libspdk_bdev_gpt.so.6.0 00:03:52.901 SYMLINK libspdk_bdev_split.so 00:03:52.901 SYMLINK libspdk_bdev_iscsi.so 00:03:52.901 LIB libspdk_bdev_passthru.a 00:03:52.901 LIB libspdk_bdev_aio.a 00:03:52.901 LIB libspdk_bdev_zone_block.a 00:03:52.901 SYMLINK libspdk_bdev_null.so 00:03:52.901 SYMLINK libspdk_bdev_gpt.so 00:03:52.901 SYMLINK libspdk_bdev_ftl.so 00:03:52.901 SO libspdk_bdev_passthru.so.6.0 00:03:52.901 SO libspdk_bdev_aio.so.6.0 00:03:52.901 SO libspdk_bdev_zone_block.so.6.0 00:03:52.901 LIB libspdk_bdev_malloc.a 00:03:52.901 LIB libspdk_bdev_delay.a 00:03:52.901 SO libspdk_bdev_malloc.so.6.0 00:03:52.901 SO libspdk_bdev_delay.so.6.0 00:03:52.901 SYMLINK libspdk_bdev_passthru.so 00:03:52.901 SYMLINK libspdk_bdev_aio.so 00:03:52.901 SYMLINK libspdk_bdev_zone_block.so 00:03:53.160 SYMLINK libspdk_bdev_malloc.so 00:03:53.160 SYMLINK libspdk_bdev_delay.so 00:03:53.160 LIB libspdk_bdev_lvol.a 00:03:53.160 SO libspdk_bdev_lvol.so.6.0 00:03:53.160 LIB libspdk_bdev_virtio.a 00:03:53.160 SO libspdk_bdev_virtio.so.6.0 00:03:53.160 SYMLINK libspdk_bdev_lvol.so 00:03:53.160 SYMLINK libspdk_bdev_virtio.so 00:03:53.728 LIB libspdk_bdev_raid.a 00:03:53.728 SO libspdk_bdev_raid.so.6.0 00:03:53.728 SYMLINK libspdk_bdev_raid.so 00:03:55.103 LIB libspdk_bdev_nvme.a 00:03:55.103 SO libspdk_bdev_nvme.so.7.1 00:03:55.103 SYMLINK libspdk_bdev_nvme.so 00:03:55.670 CC module/event/subsystems/sock/sock.o 00:03:55.670 CC module/event/subsystems/vmd/vmd.o 00:03:55.670 CC module/event/subsystems/iobuf/iobuf.o 00:03:55.670 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:55.670 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:55.670 CC module/event/subsystems/keyring/keyring.o 00:03:55.670 CC module/event/subsystems/scheduler/scheduler.o 00:03:55.670 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:55.670 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:55.670 CC module/event/subsystems/fsdev/fsdev.o 00:03:55.670 LIB libspdk_event_keyring.a 00:03:55.670 LIB libspdk_event_vhost_blk.a 00:03:55.670 LIB libspdk_event_fsdev.a 00:03:55.670 LIB libspdk_event_scheduler.a 00:03:55.670 LIB libspdk_event_vfu_tgt.a 00:03:55.670 LIB libspdk_event_vmd.a 00:03:55.670 LIB libspdk_event_sock.a 00:03:55.670 SO libspdk_event_keyring.so.1.0 00:03:55.670 SO libspdk_event_vhost_blk.so.3.0 00:03:55.670 SO libspdk_event_fsdev.so.1.0 00:03:55.670 SO libspdk_event_scheduler.so.4.0 00:03:55.670 LIB libspdk_event_iobuf.a 00:03:55.670 SO libspdk_event_vfu_tgt.so.3.0 00:03:55.670 SO libspdk_event_sock.so.5.0 
00:03:55.670 SO libspdk_event_vmd.so.6.0 00:03:55.670 SO libspdk_event_iobuf.so.3.0 00:03:55.670 SYMLINK libspdk_event_keyring.so 00:03:55.929 SYMLINK libspdk_event_vhost_blk.so 00:03:55.929 SYMLINK libspdk_event_fsdev.so 00:03:55.929 SYMLINK libspdk_event_scheduler.so 00:03:55.929 SYMLINK libspdk_event_vfu_tgt.so 00:03:55.929 SYMLINK libspdk_event_sock.so 00:03:55.929 SYMLINK libspdk_event_vmd.so 00:03:55.929 SYMLINK libspdk_event_iobuf.so 00:03:55.929 CC module/event/subsystems/accel/accel.o 00:03:56.187 LIB libspdk_event_accel.a 00:03:56.187 SO libspdk_event_accel.so.6.0 00:03:56.187 SYMLINK libspdk_event_accel.so 00:03:56.447 CC module/event/subsystems/bdev/bdev.o 00:03:56.705 LIB libspdk_event_bdev.a 00:03:56.705 SO libspdk_event_bdev.so.6.0 00:03:56.705 SYMLINK libspdk_event_bdev.so 00:03:56.705 CC module/event/subsystems/nbd/nbd.o 00:03:56.705 CC module/event/subsystems/scsi/scsi.o 00:03:56.964 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:56.964 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:56.964 CC module/event/subsystems/ublk/ublk.o 00:03:56.964 LIB libspdk_event_nbd.a 00:03:56.964 LIB libspdk_event_ublk.a 00:03:56.964 LIB libspdk_event_scsi.a 00:03:56.964 SO libspdk_event_nbd.so.6.0 00:03:56.964 SO libspdk_event_ublk.so.3.0 00:03:56.964 SO libspdk_event_scsi.so.6.0 00:03:56.964 SYMLINK libspdk_event_nbd.so 00:03:56.964 SYMLINK libspdk_event_ublk.so 00:03:56.964 SYMLINK libspdk_event_scsi.so 00:03:56.964 LIB libspdk_event_nvmf.a 00:03:57.223 SO libspdk_event_nvmf.so.6.0 00:03:57.223 SYMLINK libspdk_event_nvmf.so 00:03:57.223 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:57.223 CC module/event/subsystems/iscsi/iscsi.o 00:03:57.482 LIB libspdk_event_vhost_scsi.a 00:03:57.482 SO libspdk_event_vhost_scsi.so.3.0 00:03:57.482 LIB libspdk_event_iscsi.a 00:03:57.482 SO libspdk_event_iscsi.so.6.0 00:03:57.482 SYMLINK libspdk_event_vhost_scsi.so 00:03:57.482 SYMLINK libspdk_event_iscsi.so 00:03:57.482 SO libspdk.so.6.0 00:03:57.482 SYMLINK libspdk.so 00:03:57.749 CXX app/trace/trace.o 00:03:57.749 CC app/trace_record/trace_record.o 00:03:57.749 CC app/spdk_lspci/spdk_lspci.o 00:03:57.749 CC app/spdk_top/spdk_top.o 00:03:57.749 CC app/spdk_nvme_discover/discovery_aer.o 00:03:57.749 CC app/spdk_nvme_perf/perf.o 00:03:57.749 CC app/spdk_nvme_identify/identify.o 00:03:57.749 CC test/rpc_client/rpc_client_test.o 00:03:57.749 TEST_HEADER include/spdk/accel.h 00:03:57.749 TEST_HEADER include/spdk/accel_module.h 00:03:57.749 TEST_HEADER include/spdk/assert.h 00:03:57.749 TEST_HEADER include/spdk/base64.h 00:03:57.749 TEST_HEADER include/spdk/barrier.h 00:03:57.749 TEST_HEADER include/spdk/bdev.h 00:03:57.749 TEST_HEADER include/spdk/bdev_module.h 00:03:57.749 TEST_HEADER include/spdk/bdev_zone.h 00:03:57.749 TEST_HEADER include/spdk/bit_array.h 00:03:57.749 TEST_HEADER include/spdk/bit_pool.h 00:03:57.749 TEST_HEADER include/spdk/blob_bdev.h 00:03:57.749 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:57.749 TEST_HEADER include/spdk/blobfs.h 00:03:57.749 TEST_HEADER include/spdk/blob.h 00:03:57.749 TEST_HEADER include/spdk/conf.h 00:03:57.749 TEST_HEADER include/spdk/config.h 00:03:57.749 TEST_HEADER include/spdk/cpuset.h 00:03:57.749 TEST_HEADER include/spdk/crc16.h 00:03:57.749 TEST_HEADER include/spdk/crc64.h 00:03:57.749 TEST_HEADER include/spdk/crc32.h 00:03:57.749 TEST_HEADER include/spdk/dif.h 00:03:57.749 TEST_HEADER include/spdk/dma.h 00:03:57.749 TEST_HEADER include/spdk/endian.h 00:03:57.749 TEST_HEADER include/spdk/env_dpdk.h 00:03:57.749 TEST_HEADER include/spdk/env.h 
00:03:57.749 TEST_HEADER include/spdk/event.h 00:03:57.749 TEST_HEADER include/spdk/fd_group.h 00:03:57.749 TEST_HEADER include/spdk/fd.h 00:03:57.749 TEST_HEADER include/spdk/file.h 00:03:57.749 TEST_HEADER include/spdk/fsdev.h 00:03:57.749 TEST_HEADER include/spdk/fsdev_module.h 00:03:57.749 TEST_HEADER include/spdk/ftl.h 00:03:57.749 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:57.749 TEST_HEADER include/spdk/gpt_spec.h 00:03:57.749 TEST_HEADER include/spdk/hexlify.h 00:03:57.749 TEST_HEADER include/spdk/histogram_data.h 00:03:57.749 TEST_HEADER include/spdk/idxd_spec.h 00:03:57.749 TEST_HEADER include/spdk/idxd.h 00:03:57.749 TEST_HEADER include/spdk/init.h 00:03:57.749 TEST_HEADER include/spdk/ioat.h 00:03:57.749 TEST_HEADER include/spdk/ioat_spec.h 00:03:57.749 TEST_HEADER include/spdk/iscsi_spec.h 00:03:57.749 TEST_HEADER include/spdk/json.h 00:03:57.749 TEST_HEADER include/spdk/jsonrpc.h 00:03:57.749 TEST_HEADER include/spdk/keyring.h 00:03:57.749 TEST_HEADER include/spdk/keyring_module.h 00:03:57.749 TEST_HEADER include/spdk/likely.h 00:03:57.749 TEST_HEADER include/spdk/log.h 00:03:57.749 TEST_HEADER include/spdk/lvol.h 00:03:57.749 TEST_HEADER include/spdk/md5.h 00:03:57.749 TEST_HEADER include/spdk/memory.h 00:03:57.749 TEST_HEADER include/spdk/nbd.h 00:03:57.749 TEST_HEADER include/spdk/mmio.h 00:03:57.749 TEST_HEADER include/spdk/net.h 00:03:57.749 TEST_HEADER include/spdk/notify.h 00:03:57.749 TEST_HEADER include/spdk/nvme.h 00:03:57.749 TEST_HEADER include/spdk/nvme_intel.h 00:03:57.749 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:57.749 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:57.749 TEST_HEADER include/spdk/nvme_spec.h 00:03:57.749 TEST_HEADER include/spdk/nvme_zns.h 00:03:57.749 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:57.749 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:57.749 TEST_HEADER include/spdk/nvmf.h 00:03:57.749 TEST_HEADER include/spdk/nvmf_spec.h 00:03:57.749 TEST_HEADER include/spdk/nvmf_transport.h 00:03:57.749 TEST_HEADER include/spdk/opal.h 00:03:57.749 TEST_HEADER include/spdk/opal_spec.h 00:03:57.749 TEST_HEADER include/spdk/pci_ids.h 00:03:57.749 TEST_HEADER include/spdk/pipe.h 00:03:57.749 TEST_HEADER include/spdk/queue.h 00:03:57.749 TEST_HEADER include/spdk/reduce.h 00:03:57.749 TEST_HEADER include/spdk/rpc.h 00:03:57.749 TEST_HEADER include/spdk/scheduler.h 00:03:57.749 TEST_HEADER include/spdk/scsi.h 00:03:57.749 TEST_HEADER include/spdk/scsi_spec.h 00:03:57.749 TEST_HEADER include/spdk/sock.h 00:03:57.749 TEST_HEADER include/spdk/stdinc.h 00:03:57.749 TEST_HEADER include/spdk/string.h 00:03:57.749 TEST_HEADER include/spdk/thread.h 00:03:57.749 TEST_HEADER include/spdk/trace.h 00:03:57.749 TEST_HEADER include/spdk/tree.h 00:03:57.749 TEST_HEADER include/spdk/trace_parser.h 00:03:57.749 TEST_HEADER include/spdk/ublk.h 00:03:57.749 TEST_HEADER include/spdk/util.h 00:03:57.749 TEST_HEADER include/spdk/uuid.h 00:03:57.749 TEST_HEADER include/spdk/version.h 00:03:57.749 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:57.749 TEST_HEADER include/spdk/vhost.h 00:03:57.749 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:57.749 TEST_HEADER include/spdk/vmd.h 00:03:57.749 TEST_HEADER include/spdk/xor.h 00:03:57.749 TEST_HEADER include/spdk/zipf.h 00:03:57.749 CXX test/cpp_headers/accel.o 00:03:57.749 CXX test/cpp_headers/accel_module.o 00:03:57.749 CXX test/cpp_headers/assert.o 00:03:57.749 CXX test/cpp_headers/barrier.o 00:03:57.749 CXX test/cpp_headers/base64.o 00:03:57.749 CXX test/cpp_headers/bdev_module.o 00:03:57.749 CXX 
test/cpp_headers/bdev.o 00:03:57.749 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:57.749 CXX test/cpp_headers/bdev_zone.o 00:03:57.749 CXX test/cpp_headers/bit_array.o 00:03:57.749 CXX test/cpp_headers/bit_pool.o 00:03:57.749 CXX test/cpp_headers/blob_bdev.o 00:03:57.749 CXX test/cpp_headers/blobfs_bdev.o 00:03:57.749 CXX test/cpp_headers/blobfs.o 00:03:57.749 CXX test/cpp_headers/blob.o 00:03:57.749 CXX test/cpp_headers/conf.o 00:03:57.749 CXX test/cpp_headers/config.o 00:03:57.749 CXX test/cpp_headers/cpuset.o 00:03:57.749 CXX test/cpp_headers/crc16.o 00:03:57.749 CC app/nvmf_tgt/nvmf_main.o 00:03:57.749 CC app/spdk_dd/spdk_dd.o 00:03:58.008 CC app/iscsi_tgt/iscsi_tgt.o 00:03:58.008 CXX test/cpp_headers/crc32.o 00:03:58.008 CC examples/util/zipf/zipf.o 00:03:58.008 CC examples/ioat/verify/verify.o 00:03:58.008 CC examples/ioat/perf/perf.o 00:03:58.008 CC test/app/stub/stub.o 00:03:58.008 CC test/app/histogram_perf/histogram_perf.o 00:03:58.008 CC app/spdk_tgt/spdk_tgt.o 00:03:58.008 CC test/env/memory/memory_ut.o 00:03:58.008 CC test/thread/poller_perf/poller_perf.o 00:03:58.008 CC test/env/pci/pci_ut.o 00:03:58.008 CC test/app/jsoncat/jsoncat.o 00:03:58.008 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:58.008 CC test/env/vtophys/vtophys.o 00:03:58.008 CC app/fio/nvme/fio_plugin.o 00:03:58.008 CC test/dma/test_dma/test_dma.o 00:03:58.008 CC app/fio/bdev/fio_plugin.o 00:03:58.008 CC test/app/bdev_svc/bdev_svc.o 00:03:58.008 LINK spdk_lspci 00:03:58.008 CC test/env/mem_callbacks/mem_callbacks.o 00:03:58.008 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:58.271 LINK rpc_client_test 00:03:58.271 LINK spdk_nvme_discover 00:03:58.271 LINK histogram_perf 00:03:58.271 LINK zipf 00:03:58.271 LINK poller_perf 00:03:58.271 LINK jsoncat 00:03:58.271 LINK spdk_trace_record 00:03:58.271 CXX test/cpp_headers/crc64.o 00:03:58.271 CXX test/cpp_headers/dif.o 00:03:58.271 CXX test/cpp_headers/dma.o 00:03:58.271 CXX test/cpp_headers/endian.o 00:03:58.271 LINK vtophys 00:03:58.271 LINK interrupt_tgt 00:03:58.271 LINK env_dpdk_post_init 00:03:58.271 CXX test/cpp_headers/env_dpdk.o 00:03:58.271 LINK nvmf_tgt 00:03:58.271 CXX test/cpp_headers/env.o 00:03:58.271 CXX test/cpp_headers/event.o 00:03:58.271 CXX test/cpp_headers/fd_group.o 00:03:58.271 CXX test/cpp_headers/fd.o 00:03:58.271 CXX test/cpp_headers/file.o 00:03:58.271 CXX test/cpp_headers/fsdev.o 00:03:58.271 CXX test/cpp_headers/fsdev_module.o 00:03:58.271 LINK stub 00:03:58.271 LINK ioat_perf 00:03:58.271 LINK iscsi_tgt 00:03:58.533 CXX test/cpp_headers/ftl.o 00:03:58.533 CXX test/cpp_headers/fuse_dispatcher.o 00:03:58.533 LINK verify 00:03:58.533 CXX test/cpp_headers/gpt_spec.o 00:03:58.533 CXX test/cpp_headers/hexlify.o 00:03:58.533 LINK bdev_svc 00:03:58.533 CXX test/cpp_headers/histogram_data.o 00:03:58.533 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:58.533 LINK spdk_tgt 00:03:58.533 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:58.533 CXX test/cpp_headers/idxd.o 00:03:58.533 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:58.533 CXX test/cpp_headers/idxd_spec.o 00:03:58.533 LINK spdk_trace 00:03:58.797 CXX test/cpp_headers/init.o 00:03:58.797 CXX test/cpp_headers/ioat.o 00:03:58.797 LINK spdk_dd 00:03:58.797 CXX test/cpp_headers/ioat_spec.o 00:03:58.797 CXX test/cpp_headers/iscsi_spec.o 00:03:58.797 CXX test/cpp_headers/json.o 00:03:58.797 CXX test/cpp_headers/jsonrpc.o 00:03:58.797 CXX test/cpp_headers/keyring.o 00:03:58.797 CXX test/cpp_headers/keyring_module.o 00:03:58.797 CXX test/cpp_headers/likely.o 00:03:58.797 CXX 
test/cpp_headers/log.o 00:03:58.797 CXX test/cpp_headers/lvol.o 00:03:58.797 CXX test/cpp_headers/md5.o 00:03:58.797 CXX test/cpp_headers/memory.o 00:03:58.797 CXX test/cpp_headers/mmio.o 00:03:58.797 LINK pci_ut 00:03:58.797 CXX test/cpp_headers/nbd.o 00:03:58.797 CXX test/cpp_headers/net.o 00:03:58.797 CXX test/cpp_headers/notify.o 00:03:58.797 CXX test/cpp_headers/nvme.o 00:03:58.797 CXX test/cpp_headers/nvme_intel.o 00:03:58.797 CXX test/cpp_headers/nvme_ocssd.o 00:03:58.797 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:58.797 CXX test/cpp_headers/nvme_spec.o 00:03:58.797 CC examples/sock/hello_world/hello_sock.o 00:03:58.797 CC test/event/event_perf/event_perf.o 00:03:58.797 CXX test/cpp_headers/nvme_zns.o 00:03:58.797 CC examples/vmd/lsvmd/lsvmd.o 00:03:58.797 CC examples/vmd/led/led.o 00:03:58.797 CXX test/cpp_headers/nvmf_cmd.o 00:03:59.059 CC examples/thread/thread/thread_ex.o 00:03:59.059 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:59.059 CC test/event/reactor/reactor.o 00:03:59.059 CXX test/cpp_headers/nvmf.o 00:03:59.059 CC test/event/reactor_perf/reactor_perf.o 00:03:59.059 CC test/event/app_repeat/app_repeat.o 00:03:59.059 CC examples/idxd/perf/perf.o 00:03:59.059 LINK spdk_bdev 00:03:59.059 LINK nvme_fuzz 00:03:59.059 LINK test_dma 00:03:59.059 CXX test/cpp_headers/nvmf_spec.o 00:03:59.059 CXX test/cpp_headers/nvmf_transport.o 00:03:59.059 LINK spdk_nvme 00:03:59.059 CC test/event/scheduler/scheduler.o 00:03:59.059 CXX test/cpp_headers/opal.o 00:03:59.059 CXX test/cpp_headers/opal_spec.o 00:03:59.059 CXX test/cpp_headers/pci_ids.o 00:03:59.059 CXX test/cpp_headers/pipe.o 00:03:59.059 CXX test/cpp_headers/queue.o 00:03:59.059 CXX test/cpp_headers/reduce.o 00:03:59.059 CXX test/cpp_headers/rpc.o 00:03:59.059 CXX test/cpp_headers/scheduler.o 00:03:59.059 CXX test/cpp_headers/scsi.o 00:03:59.059 CXX test/cpp_headers/scsi_spec.o 00:03:59.319 CXX test/cpp_headers/sock.o 00:03:59.319 CXX test/cpp_headers/stdinc.o 00:03:59.319 CXX test/cpp_headers/string.o 00:03:59.319 LINK lsvmd 00:03:59.319 CXX test/cpp_headers/thread.o 00:03:59.319 CXX test/cpp_headers/trace.o 00:03:59.319 LINK event_perf 00:03:59.319 CXX test/cpp_headers/trace_parser.o 00:03:59.319 CXX test/cpp_headers/tree.o 00:03:59.319 LINK led 00:03:59.319 CXX test/cpp_headers/ublk.o 00:03:59.319 LINK reactor 00:03:59.319 LINK reactor_perf 00:03:59.319 CXX test/cpp_headers/util.o 00:03:59.319 CXX test/cpp_headers/uuid.o 00:03:59.319 CXX test/cpp_headers/version.o 00:03:59.319 CXX test/cpp_headers/vfio_user_pci.o 00:03:59.319 CXX test/cpp_headers/vfio_user_spec.o 00:03:59.319 CXX test/cpp_headers/vhost.o 00:03:59.319 LINK app_repeat 00:03:59.319 LINK vhost_fuzz 00:03:59.319 CXX test/cpp_headers/vmd.o 00:03:59.319 CXX test/cpp_headers/xor.o 00:03:59.319 LINK spdk_nvme_perf 00:03:59.319 CXX test/cpp_headers/zipf.o 00:03:59.319 LINK mem_callbacks 00:03:59.319 CC app/vhost/vhost.o 00:03:59.319 LINK spdk_nvme_identify 00:03:59.319 LINK hello_sock 00:03:59.578 LINK thread 00:03:59.578 LINK spdk_top 00:03:59.578 LINK scheduler 00:03:59.578 LINK idxd_perf 00:03:59.836 CC test/nvme/boot_partition/boot_partition.o 00:03:59.836 CC test/nvme/e2edp/nvme_dp.o 00:03:59.836 CC test/nvme/aer/aer.o 00:03:59.836 CC test/nvme/overhead/overhead.o 00:03:59.836 CC test/nvme/reserve/reserve.o 00:03:59.836 CC test/nvme/sgl/sgl.o 00:03:59.836 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:59.836 CC test/nvme/cuse/cuse.o 00:03:59.836 CC test/nvme/simple_copy/simple_copy.o 00:03:59.836 CC test/nvme/startup/startup.o 00:03:59.836 CC 
test/nvme/reset/reset.o 00:03:59.836 CC test/nvme/fused_ordering/fused_ordering.o 00:03:59.836 CC test/nvme/fdp/fdp.o 00:03:59.836 CC test/nvme/err_injection/err_injection.o 00:03:59.836 CC test/nvme/compliance/nvme_compliance.o 00:03:59.836 CC test/nvme/connect_stress/connect_stress.o 00:03:59.836 LINK vhost 00:03:59.836 CC test/blobfs/mkfs/mkfs.o 00:03:59.836 CC test/accel/dif/dif.o 00:03:59.836 CC test/lvol/esnap/esnap.o 00:03:59.836 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:59.836 CC examples/nvme/hotplug/hotplug.o 00:03:59.836 CC examples/nvme/abort/abort.o 00:03:59.836 CC examples/nvme/arbitration/arbitration.o 00:03:59.836 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:59.836 CC examples/nvme/reconnect/reconnect.o 00:03:59.836 CC examples/nvme/hello_world/hello_world.o 00:03:59.836 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:00.094 LINK boot_partition 00:04:00.094 LINK startup 00:04:00.094 LINK connect_stress 00:04:00.094 CC examples/accel/perf/accel_perf.o 00:04:00.094 LINK fused_ordering 00:04:00.094 LINK err_injection 00:04:00.094 LINK reserve 00:04:00.094 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:00.094 LINK mkfs 00:04:00.094 CC examples/blob/cli/blobcli.o 00:04:00.094 LINK simple_copy 00:04:00.094 LINK reset 00:04:00.094 LINK nvme_dp 00:04:00.094 CC examples/blob/hello_world/hello_blob.o 00:04:00.094 LINK doorbell_aers 00:04:00.094 LINK overhead 00:04:00.094 LINK fdp 00:04:00.094 LINK memory_ut 00:04:00.353 LINK pmr_persistence 00:04:00.353 LINK sgl 00:04:00.353 LINK aer 00:04:00.353 LINK nvme_compliance 00:04:00.353 LINK cmb_copy 00:04:00.353 LINK arbitration 00:04:00.353 LINK hello_world 00:04:00.353 LINK hotplug 00:04:00.353 LINK hello_fsdev 00:04:00.611 LINK reconnect 00:04:00.611 LINK nvme_manage 00:04:00.611 LINK hello_blob 00:04:00.611 LINK abort 00:04:00.611 LINK accel_perf 00:04:00.611 LINK blobcli 00:04:00.611 LINK dif 00:04:00.869 CC examples/bdev/hello_world/hello_bdev.o 00:04:00.869 CC examples/bdev/bdevperf/bdevperf.o 00:04:01.128 CC test/bdev/bdevio/bdevio.o 00:04:01.128 LINK iscsi_fuzz 00:04:01.128 LINK hello_bdev 00:04:01.386 LINK cuse 00:04:01.386 LINK bdevio 00:04:01.952 LINK bdevperf 00:04:02.210 CC examples/nvmf/nvmf/nvmf.o 00:04:02.470 LINK nvmf 00:04:05.005 LINK esnap 00:04:05.263 00:04:05.263 real 1m9.799s 00:04:05.263 user 11m53.200s 00:04:05.263 sys 2m38.652s 00:04:05.263 09:37:42 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:05.263 09:37:42 make -- common/autotest_common.sh@10 -- $ set +x 00:04:05.263 ************************************ 00:04:05.263 END TEST make 00:04:05.263 ************************************ 00:04:05.525 09:37:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:05.525 09:37:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:05.525 09:37:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:05.525 09:37:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.525 09:37:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:05.525 09:37:42 -- pm/common@44 -- $ pid=3536859 00:04:05.525 09:37:42 -- pm/common@50 -- $ kill -TERM 3536859 00:04:05.525 09:37:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.525 09:37:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:05.525 09:37:42 -- pm/common@44 -- $ pid=3536861 00:04:05.525 09:37:42 -- pm/common@50 -- $ kill -TERM 3536861 
00:04:05.525 09:37:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.525 09:37:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:05.525 09:37:42 -- pm/common@44 -- $ pid=3536863 00:04:05.525 09:37:42 -- pm/common@50 -- $ kill -TERM 3536863 00:04:05.525 09:37:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.525 09:37:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:05.525 09:37:42 -- pm/common@44 -- $ pid=3536894 00:04:05.525 09:37:42 -- pm/common@50 -- $ sudo -E kill -TERM 3536894 00:04:05.525 09:37:42 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:05.525 09:37:42 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:05.525 09:37:42 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:05.525 09:37:42 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:05.525 09:37:42 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:05.525 09:37:42 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:05.525 09:37:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.525 09:37:42 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.525 09:37:42 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.525 09:37:42 -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.525 09:37:42 -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.525 09:37:42 -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.525 09:37:42 -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.525 09:37:42 -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.525 09:37:42 -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.525 09:37:42 -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.525 09:37:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.525 09:37:42 -- scripts/common.sh@344 -- # case "$op" in 00:04:05.525 09:37:42 -- scripts/common.sh@345 -- # : 1 00:04:05.525 09:37:42 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.525 09:37:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.525 09:37:42 -- scripts/common.sh@365 -- # decimal 1 00:04:05.525 09:37:42 -- scripts/common.sh@353 -- # local d=1 00:04:05.525 09:37:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.525 09:37:42 -- scripts/common.sh@355 -- # echo 1 00:04:05.525 09:37:42 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.525 09:37:42 -- scripts/common.sh@366 -- # decimal 2 00:04:05.525 09:37:42 -- scripts/common.sh@353 -- # local d=2 00:04:05.525 09:37:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.525 09:37:42 -- scripts/common.sh@355 -- # echo 2 00:04:05.525 09:37:42 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.525 09:37:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.525 09:37:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.525 09:37:42 -- scripts/common.sh@368 -- # return 0 00:04:05.525 09:37:42 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.525 09:37:42 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:05.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.525 --rc genhtml_branch_coverage=1 00:04:05.525 --rc genhtml_function_coverage=1 00:04:05.525 --rc genhtml_legend=1 00:04:05.525 --rc geninfo_all_blocks=1 00:04:05.525 --rc geninfo_unexecuted_blocks=1 00:04:05.525 00:04:05.525 ' 00:04:05.525 09:37:42 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:05.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.525 --rc genhtml_branch_coverage=1 00:04:05.525 --rc genhtml_function_coverage=1 00:04:05.525 --rc genhtml_legend=1 00:04:05.525 --rc geninfo_all_blocks=1 00:04:05.525 --rc geninfo_unexecuted_blocks=1 00:04:05.525 00:04:05.525 ' 00:04:05.525 09:37:42 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:05.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.525 --rc genhtml_branch_coverage=1 00:04:05.525 --rc genhtml_function_coverage=1 00:04:05.525 --rc genhtml_legend=1 00:04:05.525 --rc geninfo_all_blocks=1 00:04:05.525 --rc geninfo_unexecuted_blocks=1 00:04:05.525 00:04:05.525 ' 00:04:05.525 09:37:42 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:05.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.525 --rc genhtml_branch_coverage=1 00:04:05.525 --rc genhtml_function_coverage=1 00:04:05.525 --rc genhtml_legend=1 00:04:05.525 --rc geninfo_all_blocks=1 00:04:05.525 --rc geninfo_unexecuted_blocks=1 00:04:05.525 00:04:05.525 ' 00:04:05.525 09:37:42 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:05.525 09:37:42 -- nvmf/common.sh@7 -- # uname -s 00:04:05.525 09:37:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:05.525 09:37:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:05.525 09:37:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:05.525 09:37:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:05.525 09:37:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:05.525 09:37:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:05.525 09:37:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:05.525 09:37:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:05.525 09:37:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:05.525 09:37:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:05.525 09:37:42 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:05.525 09:37:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:05.525 09:37:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:05.525 09:37:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:05.525 09:37:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:05.525 09:37:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:05.525 09:37:42 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:05.525 09:37:42 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:05.525 09:37:42 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:05.525 09:37:42 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:05.525 09:37:42 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:05.525 09:37:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.525 09:37:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.525 09:37:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.525 09:37:42 -- paths/export.sh@5 -- # export PATH 00:04:05.525 09:37:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.525 09:37:42 -- nvmf/common.sh@51 -- # : 0 00:04:05.525 09:37:42 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:05.525 09:37:42 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:05.525 09:37:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:05.525 09:37:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:05.525 09:37:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:05.525 09:37:42 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:05.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:05.525 09:37:42 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:05.525 09:37:42 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:05.525 09:37:42 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:05.525 09:37:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:05.525 09:37:42 -- spdk/autotest.sh@32 -- # uname -s 00:04:05.525 09:37:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:05.525 09:37:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:05.525 09:37:42 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
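One line in the trace above is worth calling out: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and bash's test builtin reports "integer expression expected" because the left operand is an empty string rather than a number; the check simply falls through and the run continues. The failure mode and the usual guard look like this (the flag name is made up for illustration, not the variable used in common.sh):

  flag=""
  [ "$flag" -eq 1 ] && echo enabled        # -> bash: [: : integer expression expected
  [ "${flag:-0}" -eq 1 ] && echo enabled   # default empty/unset to 0 so the comparison stays numeric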
00:04:05.525 09:37:42 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:05.525 09:37:42 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:05.525 09:37:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:05.525 09:37:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:05.525 09:37:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:05.525 09:37:42 -- spdk/autotest.sh@48 -- # udevadm_pid=3596306 00:04:05.525 09:37:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:05.526 09:37:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:05.526 09:37:42 -- pm/common@17 -- # local monitor 00:04:05.526 09:37:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.526 09:37:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.526 09:37:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.526 09:37:42 -- pm/common@21 -- # date +%s 00:04:05.526 09:37:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.526 09:37:42 -- pm/common@21 -- # date +%s 00:04:05.526 09:37:42 -- pm/common@25 -- # sleep 1 00:04:05.526 09:37:42 -- pm/common@21 -- # date +%s 00:04:05.526 09:37:42 -- pm/common@21 -- # date +%s 00:04:05.526 09:37:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732091862 00:04:05.526 09:37:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732091862 00:04:05.526 09:37:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732091862 00:04:05.526 09:37:42 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732091862 00:04:05.526 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732091862_collect-cpu-load.pm.log 00:04:05.526 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732091862_collect-vmstat.pm.log 00:04:05.526 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732091862_collect-cpu-temp.pm.log 00:04:05.526 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732091862_collect-bmc-pm.bmc.pm.log 00:04:06.907 09:37:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:06.907 09:37:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:06.907 09:37:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.907 09:37:43 -- common/autotest_common.sh@10 -- # set +x 00:04:06.907 09:37:43 -- spdk/autotest.sh@59 -- # create_test_list 00:04:06.907 09:37:43 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:06.907 09:37:43 -- common/autotest_common.sh@10 -- # set +x 00:04:06.907 09:37:43 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:06.907 09:37:43 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:06.907 09:37:43 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:06.907 09:37:43 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:06.907 09:37:43 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:06.907 09:37:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:06.907 09:37:43 -- common/autotest_common.sh@1457 -- # uname 00:04:06.907 09:37:43 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:06.907 09:37:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:06.907 09:37:43 -- common/autotest_common.sh@1477 -- # uname 00:04:06.907 09:37:43 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:06.907 09:37:43 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:06.907 09:37:43 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:06.907 lcov: LCOV version 1.15 00:04:06.907 09:37:43 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:25.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:25.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:46.980 09:38:21 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:46.980 09:38:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.980 09:38:21 -- common/autotest_common.sh@10 -- # set +x 00:04:46.980 09:38:21 -- spdk/autotest.sh@78 -- # rm -f 00:04:46.981 09:38:21 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:46.981 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:46.981 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:46.981 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:46.981 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:46.981 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:46.981 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:46.981 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:46.981 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:46.981 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:04:46.981 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:46.981 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:46.981 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:46.981 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:46.981 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:46.981 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:46.981 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:46.981 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:46.981 09:38:22 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:46.981 09:38:22 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:46.981 09:38:22 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:46.981 09:38:22 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:46.981 09:38:22 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:46.981 09:38:22 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:46.981 09:38:22 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:46.981 09:38:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:46.981 09:38:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:46.981 09:38:22 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:46.981 09:38:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.981 09:38:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:46.981 09:38:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:46.981 09:38:22 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:46.981 09:38:22 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:46.981 No valid GPT data, bailing 00:04:46.981 09:38:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:46.981 09:38:22 -- scripts/common.sh@394 -- # pt= 00:04:46.981 09:38:22 -- scripts/common.sh@395 -- # return 1 00:04:46.981 09:38:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:46.981 1+0 records in 00:04:46.981 1+0 records out 00:04:46.981 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00229028 s, 458 MB/s 00:04:46.981 09:38:22 -- spdk/autotest.sh@105 -- # sync 00:04:46.981 09:38:22 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:46.981 09:38:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:46.981 09:38:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:48.357 09:38:24 -- spdk/autotest.sh@111 -- # uname -s 00:04:48.357 09:38:24 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:48.358 09:38:24 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:48.358 09:38:24 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:49.296 Hugepages 00:04:49.296 node hugesize free / total 00:04:49.296 node0 1048576kB 0 / 0 00:04:49.296 node0 2048kB 0 / 0 00:04:49.296 node1 1048576kB 0 / 0 00:04:49.296 node1 2048kB 0 / 0 00:04:49.296 00:04:49.296 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:49.296 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:49.296 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:49.296 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:49.296 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:49.296 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:49.296 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:49.296 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:49.296 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:49.296 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:49.296 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:49.296 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:49.296 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:49.296 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:49.296 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:49.296 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:49.296 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:49.296 I/OAT 0000:80:04.7 8086 
0e27 1 ioatdma - - 00:04:49.296 09:38:26 -- spdk/autotest.sh@117 -- # uname -s 00:04:49.296 09:38:26 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:49.296 09:38:26 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:49.296 09:38:26 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:50.672 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:50.672 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:50.672 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:50.672 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:50.672 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:50.672 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:50.672 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:50.672 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:50.672 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:50.672 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:50.672 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:50.672 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:50.672 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:50.672 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:50.672 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:50.672 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:51.610 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:51.869 09:38:28 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:52.811 09:38:29 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:52.811 09:38:29 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:52.811 09:38:29 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:52.811 09:38:29 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:52.811 09:38:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:52.811 09:38:29 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:52.811 09:38:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:52.811 09:38:29 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:52.811 09:38:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:52.811 09:38:29 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:52.811 09:38:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:04:52.811 09:38:29 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:54.187 Waiting for block devices as requested 00:04:54.187 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:54.187 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:54.187 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:54.187 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:54.447 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:54.447 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:54.447 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:54.447 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:54.706 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:04:54.706 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:54.965 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:54.965 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:54.965 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:54.965 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:55.224 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:55.224 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:55.224 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:04:55.483 09:38:32 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:55.483 09:38:32 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:04:55.483 09:38:32 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:55.483 09:38:32 -- common/autotest_common.sh@1487 -- # grep 0000:0b:00.0/nvme/nvme 00:04:55.483 09:38:32 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:04:55.483 09:38:32 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:04:55.483 09:38:32 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:04:55.483 09:38:32 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:55.483 09:38:32 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:55.483 09:38:32 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:55.483 09:38:32 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:55.483 09:38:32 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:55.483 09:38:32 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:55.483 09:38:32 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:55.483 09:38:32 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:55.483 09:38:32 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:55.483 09:38:32 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:55.483 09:38:32 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:55.483 09:38:32 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:55.483 09:38:32 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:55.483 09:38:32 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:55.483 09:38:32 -- common/autotest_common.sh@1543 -- # continue 00:04:55.483 09:38:32 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:55.483 09:38:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:55.483 09:38:32 -- common/autotest_common.sh@10 -- # set +x 00:04:55.483 09:38:32 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:55.483 09:38:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:55.483 09:38:32 -- common/autotest_common.sh@10 -- # set +x 00:04:55.483 09:38:32 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:56.860 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:56.860 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:56.860 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:56.860 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:56.860 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:56.860 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:56.860 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:56.860 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:56.860 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:56.860 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:56.860 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:56.860 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:56.860 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:56.860 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:56.860 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:56.860 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:57.826 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:58.084 09:38:34 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:58.084 09:38:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:58.084 09:38:34 -- common/autotest_common.sh@10 -- # set +x 00:04:58.084 09:38:34 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:58.084 09:38:34 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:58.084 09:38:34 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:58.084 09:38:34 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:58.085 09:38:34 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:58.085 09:38:34 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:58.085 09:38:34 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:58.085 09:38:34 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:58.085 09:38:34 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:58.085 09:38:34 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:58.085 09:38:34 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.085 09:38:34 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.085 09:38:34 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:58.085 09:38:34 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:58.085 09:38:34 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:04:58.085 09:38:34 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:58.085 09:38:34 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:04:58.085 09:38:34 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:58.085 09:38:34 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:58.085 09:38:34 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:58.085 09:38:34 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:58.085 09:38:34 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:0b:00.0 00:04:58.085 09:38:34 -- common/autotest_common.sh@1579 -- # [[ -z 0000:0b:00.0 ]] 00:04:58.085 09:38:34 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3606825 00:04:58.085 09:38:34 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.085 09:38:34 -- common/autotest_common.sh@1585 -- # waitforlisten 3606825 00:04:58.085 09:38:34 -- common/autotest_common.sh@835 -- # '[' -z 3606825 ']' 00:04:58.085 09:38:34 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.085 09:38:34 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.085 09:38:34 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.085 09:38:34 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.085 09:38:34 -- common/autotest_common.sh@10 -- # set +x 00:04:58.085 [2024-11-20 09:38:34.889946] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
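The opal_revert_cleanup section above builds its list of candidate controllers by asking gen_nvme.sh for PCI addresses and then filtering on the device ID exposed in sysfs. Stripped of the xtrace noise, the pattern reduces to roughly the following (a simplified sketch of the traced autotest_common.sh helpers, not a verbatim copy):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bdfs=()
  for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
      # keep only controllers whose PCI device ID matches the part under test (0x0a54 in this run)
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && bdfs+=("$bdf")
  done
  printf '%s\n' "${bdfs[@]}"               # -> 0000:0b:00.0 here

With the single matching BDF in hand, the test starts spdk_tgt, attaches the controller via rpc.py, and attempts the OPAL revert that fails with error 18 further down.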
00:04:58.085 [2024-11-20 09:38:34.890030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3606825 ] 00:04:58.085 [2024-11-20 09:38:34.954687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.342 [2024-11-20 09:38:35.014949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.599 09:38:35 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.599 09:38:35 -- common/autotest_common.sh@868 -- # return 0 00:04:58.599 09:38:35 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:58.599 09:38:35 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:58.599 09:38:35 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:05:01.876 nvme0n1 00:05:01.876 09:38:38 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:01.876 [2024-11-20 09:38:38.623321] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:01.876 [2024-11-20 09:38:38.623364] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:01.876 request: 00:05:01.876 { 00:05:01.876 "nvme_ctrlr_name": "nvme0", 00:05:01.876 "password": "test", 00:05:01.876 "method": "bdev_nvme_opal_revert", 00:05:01.876 "req_id": 1 00:05:01.876 } 00:05:01.876 Got JSON-RPC error response 00:05:01.876 response: 00:05:01.876 { 00:05:01.876 "code": -32603, 00:05:01.876 "message": "Internal error" 00:05:01.876 } 00:05:01.876 09:38:38 -- common/autotest_common.sh@1591 -- # true 00:05:01.876 09:38:38 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:01.876 09:38:38 -- common/autotest_common.sh@1595 -- # killprocess 3606825 00:05:01.876 09:38:38 -- common/autotest_common.sh@954 -- # '[' -z 3606825 ']' 00:05:01.876 09:38:38 -- common/autotest_common.sh@958 -- # kill -0 3606825 00:05:01.876 09:38:38 -- common/autotest_common.sh@959 -- # uname 00:05:01.876 09:38:38 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.876 09:38:38 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3606825 00:05:01.876 09:38:38 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.876 09:38:38 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.876 09:38:38 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3606825' 00:05:01.876 killing process with pid 3606825 00:05:01.876 09:38:38 -- common/autotest_common.sh@973 -- # kill 3606825 00:05:01.876 09:38:38 -- common/autotest_common.sh@978 -- # wait 3606825 00:05:03.769 09:38:40 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:03.769 09:38:40 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:03.769 09:38:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:03.769 09:38:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:03.769 09:38:40 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:03.769 09:38:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.769 09:38:40 -- common/autotest_common.sh@10 -- # set +x 00:05:03.769 09:38:40 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:03.769 09:38:40 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:03.769 09:38:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.769 09:38:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.769 09:38:40 -- common/autotest_common.sh@10 -- # set +x 00:05:03.769 ************************************ 00:05:03.769 START TEST env 00:05:03.769 ************************************ 00:05:03.769 09:38:40 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:03.769 * Looking for test storage... 00:05:03.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:03.769 09:38:40 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.769 09:38:40 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.769 09:38:40 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.769 09:38:40 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.769 09:38:40 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.769 09:38:40 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.769 09:38:40 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.769 09:38:40 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.769 09:38:40 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.769 09:38:40 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.769 09:38:40 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.769 09:38:40 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.769 09:38:40 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.769 09:38:40 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.769 09:38:40 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.769 09:38:40 env -- scripts/common.sh@344 -- # case "$op" in 00:05:03.769 09:38:40 env -- scripts/common.sh@345 -- # : 1 00:05:03.769 09:38:40 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.769 09:38:40 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.769 09:38:40 env -- scripts/common.sh@365 -- # decimal 1 00:05:03.769 09:38:40 env -- scripts/common.sh@353 -- # local d=1 00:05:03.769 09:38:40 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.769 09:38:40 env -- scripts/common.sh@355 -- # echo 1 00:05:03.769 09:38:40 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.769 09:38:40 env -- scripts/common.sh@366 -- # decimal 2 00:05:03.769 09:38:40 env -- scripts/common.sh@353 -- # local d=2 00:05:03.769 09:38:40 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.769 09:38:40 env -- scripts/common.sh@355 -- # echo 2 00:05:03.769 09:38:40 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.769 09:38:40 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.769 09:38:40 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.769 09:38:40 env -- scripts/common.sh@368 -- # return 0 00:05:03.769 09:38:40 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.769 09:38:40 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.769 --rc genhtml_branch_coverage=1 00:05:03.769 --rc genhtml_function_coverage=1 00:05:03.769 --rc genhtml_legend=1 00:05:03.769 --rc geninfo_all_blocks=1 00:05:03.769 --rc geninfo_unexecuted_blocks=1 00:05:03.769 00:05:03.769 ' 00:05:03.769 09:38:40 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.769 --rc genhtml_branch_coverage=1 00:05:03.769 --rc genhtml_function_coverage=1 00:05:03.769 --rc genhtml_legend=1 00:05:03.769 --rc geninfo_all_blocks=1 00:05:03.769 --rc geninfo_unexecuted_blocks=1 00:05:03.769 00:05:03.769 ' 00:05:03.769 09:38:40 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.769 --rc genhtml_branch_coverage=1 00:05:03.769 --rc genhtml_function_coverage=1 00:05:03.769 --rc genhtml_legend=1 00:05:03.769 --rc geninfo_all_blocks=1 00:05:03.769 --rc geninfo_unexecuted_blocks=1 00:05:03.769 00:05:03.769 ' 00:05:03.769 09:38:40 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.769 --rc genhtml_branch_coverage=1 00:05:03.769 --rc genhtml_function_coverage=1 00:05:03.769 --rc genhtml_legend=1 00:05:03.769 --rc geninfo_all_blocks=1 00:05:03.769 --rc geninfo_unexecuted_blocks=1 00:05:03.769 00:05:03.769 ' 00:05:03.769 09:38:40 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:03.769 09:38:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.769 09:38:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.769 09:38:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.769 ************************************ 00:05:03.769 START TEST env_memory 00:05:03.769 ************************************ 00:05:03.769 09:38:40 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:03.769 00:05:03.769 00:05:03.769 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.769 http://cunit.sourceforge.net/ 00:05:03.769 00:05:03.769 00:05:03.769 Suite: memory 00:05:03.769 Test: alloc and free memory map ...[2024-11-20 09:38:40.626599] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:03.769 passed 00:05:03.769 Test: mem map translation ...[2024-11-20 09:38:40.646561] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:03.769 [2024-11-20 09:38:40.646582] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:03.769 [2024-11-20 09:38:40.646629] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:03.769 [2024-11-20 09:38:40.646641] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:04.025 passed 00:05:04.025 Test: mem map registration ...[2024-11-20 09:38:40.692473] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:04.025 [2024-11-20 09:38:40.692497] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:04.025 passed 00:05:04.025 Test: mem map adjacent registrations ...passed 00:05:04.025 00:05:04.025 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.025 suites 1 1 n/a 0 0 00:05:04.025 tests 4 4 4 0 0 00:05:04.025 asserts 152 152 152 0 n/a 00:05:04.025 00:05:04.025 Elapsed time = 0.152 seconds 00:05:04.025 00:05:04.025 real 0m0.159s 00:05:04.025 user 0m0.150s 00:05:04.025 sys 0m0.008s 00:05:04.025 09:38:40 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.025 09:38:40 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:04.026 ************************************ 00:05:04.026 END TEST env_memory 00:05:04.026 ************************************ 00:05:04.026 09:38:40 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:04.026 09:38:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.026 09:38:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.026 09:38:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.026 ************************************ 00:05:04.026 START TEST env_vtophys 00:05:04.026 ************************************ 00:05:04.026 09:38:40 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:04.026 EAL: lib.eal log level changed from notice to debug 00:05:04.026 EAL: Detected lcore 0 as core 0 on socket 0 00:05:04.026 EAL: Detected lcore 1 as core 1 on socket 0 00:05:04.026 EAL: Detected lcore 2 as core 2 on socket 0 00:05:04.026 EAL: Detected lcore 3 as core 3 on socket 0 00:05:04.026 EAL: Detected lcore 4 as core 4 on socket 0 00:05:04.026 EAL: Detected lcore 5 as core 5 on socket 0 00:05:04.026 EAL: Detected lcore 6 as core 8 on socket 0 00:05:04.026 EAL: Detected lcore 7 as core 9 on socket 0 00:05:04.026 EAL: Detected lcore 8 as core 10 on socket 0 00:05:04.026 EAL: Detected lcore 9 as core 11 on socket 0 00:05:04.026 EAL: Detected lcore 10 
as core 12 on socket 0 00:05:04.026 EAL: Detected lcore 11 as core 13 on socket 0 00:05:04.026 EAL: Detected lcore 12 as core 0 on socket 1 00:05:04.026 EAL: Detected lcore 13 as core 1 on socket 1 00:05:04.026 EAL: Detected lcore 14 as core 2 on socket 1 00:05:04.026 EAL: Detected lcore 15 as core 3 on socket 1 00:05:04.026 EAL: Detected lcore 16 as core 4 on socket 1 00:05:04.026 EAL: Detected lcore 17 as core 5 on socket 1 00:05:04.026 EAL: Detected lcore 18 as core 8 on socket 1 00:05:04.026 EAL: Detected lcore 19 as core 9 on socket 1 00:05:04.026 EAL: Detected lcore 20 as core 10 on socket 1 00:05:04.026 EAL: Detected lcore 21 as core 11 on socket 1 00:05:04.026 EAL: Detected lcore 22 as core 12 on socket 1 00:05:04.026 EAL: Detected lcore 23 as core 13 on socket 1 00:05:04.026 EAL: Detected lcore 24 as core 0 on socket 0 00:05:04.026 EAL: Detected lcore 25 as core 1 on socket 0 00:05:04.026 EAL: Detected lcore 26 as core 2 on socket 0 00:05:04.026 EAL: Detected lcore 27 as core 3 on socket 0 00:05:04.026 EAL: Detected lcore 28 as core 4 on socket 0 00:05:04.026 EAL: Detected lcore 29 as core 5 on socket 0 00:05:04.026 EAL: Detected lcore 30 as core 8 on socket 0 00:05:04.026 EAL: Detected lcore 31 as core 9 on socket 0 00:05:04.026 EAL: Detected lcore 32 as core 10 on socket 0 00:05:04.026 EAL: Detected lcore 33 as core 11 on socket 0 00:05:04.026 EAL: Detected lcore 34 as core 12 on socket 0 00:05:04.026 EAL: Detected lcore 35 as core 13 on socket 0 00:05:04.026 EAL: Detected lcore 36 as core 0 on socket 1 00:05:04.026 EAL: Detected lcore 37 as core 1 on socket 1 00:05:04.026 EAL: Detected lcore 38 as core 2 on socket 1 00:05:04.026 EAL: Detected lcore 39 as core 3 on socket 1 00:05:04.026 EAL: Detected lcore 40 as core 4 on socket 1 00:05:04.026 EAL: Detected lcore 41 as core 5 on socket 1 00:05:04.026 EAL: Detected lcore 42 as core 8 on socket 1 00:05:04.026 EAL: Detected lcore 43 as core 9 on socket 1 00:05:04.026 EAL: Detected lcore 44 as core 10 on socket 1 00:05:04.026 EAL: Detected lcore 45 as core 11 on socket 1 00:05:04.026 EAL: Detected lcore 46 as core 12 on socket 1 00:05:04.026 EAL: Detected lcore 47 as core 13 on socket 1 00:05:04.026 EAL: Maximum logical cores by configuration: 128 00:05:04.026 EAL: Detected CPU lcores: 48 00:05:04.026 EAL: Detected NUMA nodes: 2 00:05:04.026 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:04.026 EAL: Detected shared linkage of DPDK 00:05:04.026 EAL: No shared files mode enabled, IPC will be disabled 00:05:04.026 EAL: Bus pci wants IOVA as 'DC' 00:05:04.026 EAL: Buses did not request a specific IOVA mode. 00:05:04.026 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:04.026 EAL: Selected IOVA mode 'VA' 00:05:04.026 EAL: Probing VFIO support... 00:05:04.026 EAL: IOMMU type 1 (Type 1) is supported 00:05:04.026 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:04.026 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:04.026 EAL: VFIO support initialized 00:05:04.026 EAL: Ask a virtual area of 0x2e000 bytes 00:05:04.026 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:04.026 EAL: Setting up physically contiguous memory... 
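The lcore map EAL prints above (48 lcores spread over 2 NUMA sockets) is just the kernel's CPU topology re-read at startup. A quick way to cross-check the same core/socket assignment from sysfs, independent of DPDK (generic Linux paths, nothing SPDK-specific):

  for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
      printf '%s: core %s on socket %s\n' "${cpu##*/}" \
          "$(cat "$cpu/topology/core_id")" \
          "$(cat "$cpu/topology/physical_package_id")"
  done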
00:05:04.026 EAL: Setting maximum number of open files to 524288 00:05:04.026 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:04.026 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:04.026 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:04.026 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.026 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:04.026 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.026 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.026 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:04.026 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:04.026 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.026 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:04.026 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.026 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.026 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:04.026 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:04.026 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.026 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:04.026 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.026 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.026 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:04.026 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:04.026 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.026 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:04.026 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.026 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.026 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:04.026 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:04.026 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:04.026 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.026 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:04.026 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.026 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.026 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:04.026 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:04.026 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.026 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:04.026 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.026 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.026 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:04.026 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:04.026 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.026 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:04.026 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.026 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.026 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:04.026 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:04.026 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.026 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:04.026 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.026 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.026 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:04.026 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:04.026 EAL: Hugepages will be freed exactly as allocated. 00:05:04.026 EAL: No shared files mode enabled, IPC is disabled 00:05:04.026 EAL: No shared files mode enabled, IPC is disabled 00:05:04.026 EAL: TSC frequency is ~2700000 KHz 00:05:04.026 EAL: Main lcore 0 is ready (tid=7f41fdb79a00;cpuset=[0]) 00:05:04.026 EAL: Trying to obtain current memory policy. 00:05:04.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.026 EAL: Restoring previous memory policy: 0 00:05:04.026 EAL: request: mp_malloc_sync 00:05:04.026 EAL: No shared files mode enabled, IPC is disabled 00:05:04.026 EAL: Heap on socket 0 was expanded by 2MB 00:05:04.026 EAL: No shared files mode enabled, IPC is disabled 00:05:04.026 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:04.026 EAL: Mem event callback 'spdk:(nil)' registered 00:05:04.026 00:05:04.026 00:05:04.026 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.026 http://cunit.sourceforge.net/ 00:05:04.026 00:05:04.026 00:05:04.026 Suite: components_suite 00:05:04.026 Test: vtophys_malloc_test ...passed 00:05:04.026 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:04.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.026 EAL: Restoring previous memory policy: 4 00:05:04.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.026 EAL: request: mp_malloc_sync 00:05:04.026 EAL: No shared files mode enabled, IPC is disabled 00:05:04.026 EAL: Heap on socket 0 was expanded by 4MB 00:05:04.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.026 EAL: request: mp_malloc_sync 00:05:04.026 EAL: No shared files mode enabled, IPC is disabled 00:05:04.026 EAL: Heap on socket 0 was shrunk by 4MB 00:05:04.026 EAL: Trying to obtain current memory policy. 00:05:04.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.026 EAL: Restoring previous memory policy: 4 00:05:04.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.026 EAL: request: mp_malloc_sync 00:05:04.026 EAL: No shared files mode enabled, IPC is disabled 00:05:04.026 EAL: Heap on socket 0 was expanded by 6MB 00:05:04.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.026 EAL: request: mp_malloc_sync 00:05:04.026 EAL: No shared files mode enabled, IPC is disabled 00:05:04.026 EAL: Heap on socket 0 was shrunk by 6MB 00:05:04.026 EAL: Trying to obtain current memory policy. 00:05:04.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.026 EAL: Restoring previous memory policy: 4 00:05:04.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.026 EAL: request: mp_malloc_sync 00:05:04.026 EAL: No shared files mode enabled, IPC is disabled 00:05:04.026 EAL: Heap on socket 0 was expanded by 10MB 00:05:04.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.026 EAL: request: mp_malloc_sync 00:05:04.026 EAL: No shared files mode enabled, IPC is disabled 00:05:04.026 EAL: Heap on socket 0 was shrunk by 10MB 00:05:04.026 EAL: Trying to obtain current memory policy. 
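The memseg lists above are backed by 2 MB hugepages (the 0x800kB page size) reserved on both NUMA nodes, which is what the earlier setup.sh status table summarized. For anyone reproducing this outside the CI wrapper, the per-node reservation can be read straight from sysfs (a generic check, not an SPDK script):

  for node in /sys/devices/system/node/node[0-9]*; do
      hp="$node/hugepages/hugepages-2048kB"
      echo "${node##*/}: $(cat "$hp/free_hugepages") free / $(cat "$hp/nr_hugepages") total 2048kB pages"
  done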
00:05:04.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.026 EAL: Restoring previous memory policy: 4 00:05:04.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.026 EAL: request: mp_malloc_sync 00:05:04.026 EAL: No shared files mode enabled, IPC is disabled 00:05:04.026 EAL: Heap on socket 0 was expanded by 18MB 00:05:04.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.026 EAL: request: mp_malloc_sync 00:05:04.026 EAL: No shared files mode enabled, IPC is disabled 00:05:04.026 EAL: Heap on socket 0 was shrunk by 18MB 00:05:04.026 EAL: Trying to obtain current memory policy. 00:05:04.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.026 EAL: Restoring previous memory policy: 4 00:05:04.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.026 EAL: request: mp_malloc_sync 00:05:04.026 EAL: No shared files mode enabled, IPC is disabled 00:05:04.026 EAL: Heap on socket 0 was expanded by 34MB 00:05:04.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.026 EAL: request: mp_malloc_sync 00:05:04.026 EAL: No shared files mode enabled, IPC is disabled 00:05:04.026 EAL: Heap on socket 0 was shrunk by 34MB 00:05:04.026 EAL: Trying to obtain current memory policy. 00:05:04.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.026 EAL: Restoring previous memory policy: 4 00:05:04.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.026 EAL: request: mp_malloc_sync 00:05:04.026 EAL: No shared files mode enabled, IPC is disabled 00:05:04.026 EAL: Heap on socket 0 was expanded by 66MB 00:05:04.026 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.283 EAL: request: mp_malloc_sync 00:05:04.283 EAL: No shared files mode enabled, IPC is disabled 00:05:04.283 EAL: Heap on socket 0 was shrunk by 66MB 00:05:04.283 EAL: Trying to obtain current memory policy. 00:05:04.283 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.283 EAL: Restoring previous memory policy: 4 00:05:04.283 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.283 EAL: request: mp_malloc_sync 00:05:04.283 EAL: No shared files mode enabled, IPC is disabled 00:05:04.283 EAL: Heap on socket 0 was expanded by 130MB 00:05:04.283 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.283 EAL: request: mp_malloc_sync 00:05:04.283 EAL: No shared files mode enabled, IPC is disabled 00:05:04.283 EAL: Heap on socket 0 was shrunk by 130MB 00:05:04.283 EAL: Trying to obtain current memory policy. 00:05:04.283 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.283 EAL: Restoring previous memory policy: 4 00:05:04.283 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.283 EAL: request: mp_malloc_sync 00:05:04.283 EAL: No shared files mode enabled, IPC is disabled 00:05:04.283 EAL: Heap on socket 0 was expanded by 258MB 00:05:04.283 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.540 EAL: request: mp_malloc_sync 00:05:04.540 EAL: No shared files mode enabled, IPC is disabled 00:05:04.540 EAL: Heap on socket 0 was shrunk by 258MB 00:05:04.540 EAL: Trying to obtain current memory policy. 
00:05:04.540 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.540 EAL: Restoring previous memory policy: 4 00:05:04.540 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.540 EAL: request: mp_malloc_sync 00:05:04.540 EAL: No shared files mode enabled, IPC is disabled 00:05:04.540 EAL: Heap on socket 0 was expanded by 514MB 00:05:04.540 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.797 EAL: request: mp_malloc_sync 00:05:04.797 EAL: No shared files mode enabled, IPC is disabled 00:05:04.797 EAL: Heap on socket 0 was shrunk by 514MB 00:05:04.797 EAL: Trying to obtain current memory policy. 00:05:04.797 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.056 EAL: Restoring previous memory policy: 4 00:05:05.056 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.056 EAL: request: mp_malloc_sync 00:05:05.056 EAL: No shared files mode enabled, IPC is disabled 00:05:05.056 EAL: Heap on socket 0 was expanded by 1026MB 00:05:05.313 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.571 EAL: request: mp_malloc_sync 00:05:05.571 EAL: No shared files mode enabled, IPC is disabled 00:05:05.571 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:05.571 passed 00:05:05.571 00:05:05.571 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.571 suites 1 1 n/a 0 0 00:05:05.571 tests 2 2 2 0 0 00:05:05.571 asserts 497 497 497 0 n/a 00:05:05.571 00:05:05.571 Elapsed time = 1.360 seconds 00:05:05.571 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.571 EAL: request: mp_malloc_sync 00:05:05.571 EAL: No shared files mode enabled, IPC is disabled 00:05:05.571 EAL: Heap on socket 0 was shrunk by 2MB 00:05:05.571 EAL: No shared files mode enabled, IPC is disabled 00:05:05.571 EAL: No shared files mode enabled, IPC is disabled 00:05:05.571 EAL: No shared files mode enabled, IPC is disabled 00:05:05.571 00:05:05.571 real 0m1.483s 00:05:05.571 user 0m0.861s 00:05:05.571 sys 0m0.583s 00:05:05.571 09:38:42 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.571 09:38:42 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:05.571 ************************************ 00:05:05.571 END TEST env_vtophys 00:05:05.571 ************************************ 00:05:05.571 09:38:42 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:05.571 09:38:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.571 09:38:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.571 09:38:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.571 ************************************ 00:05:05.571 START TEST env_pci 00:05:05.571 ************************************ 00:05:05.571 09:38:42 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:05.571 00:05:05.571 00:05:05.571 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.571 http://cunit.sourceforge.net/ 00:05:05.571 00:05:05.571 00:05:05.571 Suite: pci 00:05:05.571 Test: pci_hook ...[2024-11-20 09:38:42.344838] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3607723 has claimed it 00:05:05.571 EAL: Cannot find device (10000:00:01.0) 00:05:05.571 EAL: Failed to attach device on primary process 00:05:05.571 passed 00:05:05.571 00:05:05.571 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:05.571 suites 1 1 n/a 0 0 00:05:05.571 tests 1 1 1 0 0 00:05:05.571 asserts 25 25 25 0 n/a 00:05:05.571 00:05:05.571 Elapsed time = 0.021 seconds 00:05:05.571 00:05:05.571 real 0m0.035s 00:05:05.571 user 0m0.013s 00:05:05.571 sys 0m0.022s 00:05:05.571 09:38:42 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.571 09:38:42 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:05.571 ************************************ 00:05:05.571 END TEST env_pci 00:05:05.571 ************************************ 00:05:05.571 09:38:42 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:05.571 09:38:42 env -- env/env.sh@15 -- # uname 00:05:05.571 09:38:42 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:05.571 09:38:42 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:05.571 09:38:42 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:05.571 09:38:42 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:05.571 09:38:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.571 09:38:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.571 ************************************ 00:05:05.571 START TEST env_dpdk_post_init 00:05:05.571 ************************************ 00:05:05.571 09:38:42 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:05.571 EAL: Detected CPU lcores: 48 00:05:05.571 EAL: Detected NUMA nodes: 2 00:05:05.571 EAL: Detected shared linkage of DPDK 00:05:05.571 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:05.571 EAL: Selected IOVA mode 'VA' 00:05:05.571 EAL: VFIO support initialized 00:05:05.571 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:05.830 EAL: Using IOMMU type 1 (Type 1) 00:05:05.830 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:05.830 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:05.830 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:05.830 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:05.830 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:05.830 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:05.830 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:05.830 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:06.767 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:05:06.767 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:06.767 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:06.767 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:06.767 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:06.767 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:06.767 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:06.767 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:06.767 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 
00:05:10.044 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:05:10.044 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:05:10.044 Starting DPDK initialization... 00:05:10.044 Starting SPDK post initialization... 00:05:10.044 SPDK NVMe probe 00:05:10.044 Attaching to 0000:0b:00.0 00:05:10.044 Attached to 0000:0b:00.0 00:05:10.044 Cleaning up... 00:05:10.044 00:05:10.044 real 0m4.333s 00:05:10.044 user 0m2.977s 00:05:10.044 sys 0m0.418s 00:05:10.044 09:38:46 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.044 09:38:46 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:10.044 ************************************ 00:05:10.044 END TEST env_dpdk_post_init 00:05:10.044 ************************************ 00:05:10.044 09:38:46 env -- env/env.sh@26 -- # uname 00:05:10.044 09:38:46 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:10.044 09:38:46 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:10.044 09:38:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.044 09:38:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.044 09:38:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.044 ************************************ 00:05:10.044 START TEST env_mem_callbacks 00:05:10.044 ************************************ 00:05:10.044 09:38:46 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:10.044 EAL: Detected CPU lcores: 48 00:05:10.044 EAL: Detected NUMA nodes: 2 00:05:10.044 EAL: Detected shared linkage of DPDK 00:05:10.044 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:10.044 EAL: Selected IOVA mode 'VA' 00:05:10.044 EAL: VFIO support initialized 00:05:10.044 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:10.044 00:05:10.044 00:05:10.044 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.044 http://cunit.sourceforge.net/ 00:05:10.044 00:05:10.044 00:05:10.044 Suite: memory 00:05:10.044 Test: test ... 
00:05:10.044 register 0x200000200000 2097152 00:05:10.044 malloc 3145728 00:05:10.044 register 0x200000400000 4194304 00:05:10.044 buf 0x200000500000 len 3145728 PASSED 00:05:10.044 malloc 64 00:05:10.044 buf 0x2000004fff40 len 64 PASSED 00:05:10.044 malloc 4194304 00:05:10.044 register 0x200000800000 6291456 00:05:10.044 buf 0x200000a00000 len 4194304 PASSED 00:05:10.044 free 0x200000500000 3145728 00:05:10.044 free 0x2000004fff40 64 00:05:10.044 unregister 0x200000400000 4194304 PASSED 00:05:10.044 free 0x200000a00000 4194304 00:05:10.044 unregister 0x200000800000 6291456 PASSED 00:05:10.044 malloc 8388608 00:05:10.044 register 0x200000400000 10485760 00:05:10.044 buf 0x200000600000 len 8388608 PASSED 00:05:10.044 free 0x200000600000 8388608 00:05:10.044 unregister 0x200000400000 10485760 PASSED 00:05:10.044 passed 00:05:10.044 00:05:10.044 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.044 suites 1 1 n/a 0 0 00:05:10.044 tests 1 1 1 0 0 00:05:10.044 asserts 15 15 15 0 n/a 00:05:10.044 00:05:10.044 Elapsed time = 0.005 seconds 00:05:10.044 00:05:10.044 real 0m0.049s 00:05:10.044 user 0m0.012s 00:05:10.044 sys 0m0.037s 00:05:10.044 09:38:46 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.044 09:38:46 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:10.044 ************************************ 00:05:10.044 END TEST env_mem_callbacks 00:05:10.044 ************************************ 00:05:10.044 00:05:10.044 real 0m6.447s 00:05:10.044 user 0m4.206s 00:05:10.044 sys 0m1.285s 00:05:10.044 09:38:46 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.044 09:38:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.044 ************************************ 00:05:10.044 END TEST env 00:05:10.044 ************************************ 00:05:10.044 09:38:46 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:10.044 09:38:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.044 09:38:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.044 09:38:46 -- common/autotest_common.sh@10 -- # set +x 00:05:10.044 ************************************ 00:05:10.044 START TEST rpc 00:05:10.044 ************************************ 00:05:10.044 09:38:46 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:10.302 * Looking for test storage... 
00:05:10.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:10.302 09:38:46 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:10.302 09:38:46 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:10.302 09:38:46 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:10.302 09:38:47 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:10.302 09:38:47 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.302 09:38:47 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.302 09:38:47 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.302 09:38:47 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.302 09:38:47 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.302 09:38:47 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.302 09:38:47 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.302 09:38:47 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.302 09:38:47 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.302 09:38:47 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.302 09:38:47 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.302 09:38:47 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:10.302 09:38:47 rpc -- scripts/common.sh@345 -- # : 1 00:05:10.302 09:38:47 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.302 09:38:47 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.302 09:38:47 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:10.302 09:38:47 rpc -- scripts/common.sh@353 -- # local d=1 00:05:10.302 09:38:47 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.302 09:38:47 rpc -- scripts/common.sh@355 -- # echo 1 00:05:10.302 09:38:47 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.302 09:38:47 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:10.302 09:38:47 rpc -- scripts/common.sh@353 -- # local d=2 00:05:10.302 09:38:47 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.302 09:38:47 rpc -- scripts/common.sh@355 -- # echo 2 00:05:10.302 09:38:47 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.302 09:38:47 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.302 09:38:47 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.302 09:38:47 rpc -- scripts/common.sh@368 -- # return 0 00:05:10.302 09:38:47 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.302 09:38:47 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:10.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.302 --rc genhtml_branch_coverage=1 00:05:10.302 --rc genhtml_function_coverage=1 00:05:10.302 --rc genhtml_legend=1 00:05:10.302 --rc geninfo_all_blocks=1 00:05:10.302 --rc geninfo_unexecuted_blocks=1 00:05:10.302 00:05:10.302 ' 00:05:10.302 09:38:47 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:10.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.302 --rc genhtml_branch_coverage=1 00:05:10.302 --rc genhtml_function_coverage=1 00:05:10.302 --rc genhtml_legend=1 00:05:10.302 --rc geninfo_all_blocks=1 00:05:10.302 --rc geninfo_unexecuted_blocks=1 00:05:10.302 00:05:10.302 ' 00:05:10.302 09:38:47 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:10.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.302 --rc genhtml_branch_coverage=1 00:05:10.302 --rc genhtml_function_coverage=1 
00:05:10.302 --rc genhtml_legend=1 00:05:10.302 --rc geninfo_all_blocks=1 00:05:10.302 --rc geninfo_unexecuted_blocks=1 00:05:10.302 00:05:10.302 ' 00:05:10.302 09:38:47 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:10.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.302 --rc genhtml_branch_coverage=1 00:05:10.302 --rc genhtml_function_coverage=1 00:05:10.302 --rc genhtml_legend=1 00:05:10.302 --rc geninfo_all_blocks=1 00:05:10.302 --rc geninfo_unexecuted_blocks=1 00:05:10.302 00:05:10.302 ' 00:05:10.302 09:38:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3608459 00:05:10.302 09:38:47 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:10.302 09:38:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.302 09:38:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3608459 00:05:10.302 09:38:47 rpc -- common/autotest_common.sh@835 -- # '[' -z 3608459 ']' 00:05:10.302 09:38:47 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.302 09:38:47 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.302 09:38:47 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.302 09:38:47 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.302 09:38:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.302 [2024-11-20 09:38:47.118643] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:05:10.302 [2024-11-20 09:38:47.118768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3608459 ] 00:05:10.302 [2024-11-20 09:38:47.188556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.560 [2024-11-20 09:38:47.249646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:10.560 [2024-11-20 09:38:47.249700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3608459' to capture a snapshot of events at runtime. 00:05:10.560 [2024-11-20 09:38:47.249713] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:10.560 [2024-11-20 09:38:47.249724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:10.560 [2024-11-20 09:38:47.249733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3608459 for offline analysis/debug. 
00:05:10.560 [2024-11-20 09:38:47.250327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.818 09:38:47 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.818 09:38:47 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:10.818 09:38:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:10.818 09:38:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:10.818 09:38:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:10.818 09:38:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:10.818 09:38:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.818 09:38:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.818 09:38:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.818 ************************************ 00:05:10.818 START TEST rpc_integrity 00:05:10.818 ************************************ 00:05:10.818 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:10.818 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:10.818 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.818 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.818 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.818 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:10.818 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:10.818 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:10.818 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:10.818 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.818 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.818 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.818 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:10.818 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:10.818 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.818 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.818 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.818 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:10.818 { 00:05:10.818 "name": "Malloc0", 00:05:10.818 "aliases": [ 00:05:10.818 "9b9e731c-49d0-47e9-bfb0-20a5f778de54" 00:05:10.818 ], 00:05:10.818 "product_name": "Malloc disk", 00:05:10.818 "block_size": 512, 00:05:10.818 "num_blocks": 16384, 00:05:10.818 "uuid": "9b9e731c-49d0-47e9-bfb0-20a5f778de54", 00:05:10.818 "assigned_rate_limits": { 00:05:10.818 "rw_ios_per_sec": 0, 00:05:10.818 "rw_mbytes_per_sec": 0, 00:05:10.818 "r_mbytes_per_sec": 0, 00:05:10.818 "w_mbytes_per_sec": 0 00:05:10.818 }, 
00:05:10.818 "claimed": false, 00:05:10.818 "zoned": false, 00:05:10.818 "supported_io_types": { 00:05:10.818 "read": true, 00:05:10.818 "write": true, 00:05:10.818 "unmap": true, 00:05:10.818 "flush": true, 00:05:10.818 "reset": true, 00:05:10.818 "nvme_admin": false, 00:05:10.818 "nvme_io": false, 00:05:10.818 "nvme_io_md": false, 00:05:10.818 "write_zeroes": true, 00:05:10.818 "zcopy": true, 00:05:10.818 "get_zone_info": false, 00:05:10.818 "zone_management": false, 00:05:10.818 "zone_append": false, 00:05:10.818 "compare": false, 00:05:10.818 "compare_and_write": false, 00:05:10.818 "abort": true, 00:05:10.818 "seek_hole": false, 00:05:10.818 "seek_data": false, 00:05:10.818 "copy": true, 00:05:10.818 "nvme_iov_md": false 00:05:10.818 }, 00:05:10.818 "memory_domains": [ 00:05:10.818 { 00:05:10.818 "dma_device_id": "system", 00:05:10.818 "dma_device_type": 1 00:05:10.818 }, 00:05:10.818 { 00:05:10.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.818 "dma_device_type": 2 00:05:10.818 } 00:05:10.818 ], 00:05:10.818 "driver_specific": {} 00:05:10.818 } 00:05:10.818 ]' 00:05:10.818 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:10.818 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:10.818 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:10.818 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.818 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.818 [2024-11-20 09:38:47.651260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:10.818 [2024-11-20 09:38:47.651323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:10.818 [2024-11-20 09:38:47.651373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x245e750 00:05:10.818 [2024-11-20 09:38:47.651388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:10.818 [2024-11-20 09:38:47.652762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:10.818 [2024-11-20 09:38:47.652785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:10.818 Passthru0 00:05:10.818 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.818 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:10.818 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.818 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.818 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.818 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:10.818 { 00:05:10.818 "name": "Malloc0", 00:05:10.818 "aliases": [ 00:05:10.818 "9b9e731c-49d0-47e9-bfb0-20a5f778de54" 00:05:10.818 ], 00:05:10.818 "product_name": "Malloc disk", 00:05:10.818 "block_size": 512, 00:05:10.818 "num_blocks": 16384, 00:05:10.818 "uuid": "9b9e731c-49d0-47e9-bfb0-20a5f778de54", 00:05:10.818 "assigned_rate_limits": { 00:05:10.818 "rw_ios_per_sec": 0, 00:05:10.818 "rw_mbytes_per_sec": 0, 00:05:10.818 "r_mbytes_per_sec": 0, 00:05:10.818 "w_mbytes_per_sec": 0 00:05:10.818 }, 00:05:10.818 "claimed": true, 00:05:10.818 "claim_type": "exclusive_write", 00:05:10.818 "zoned": false, 00:05:10.818 "supported_io_types": { 00:05:10.818 "read": true, 00:05:10.818 "write": true, 00:05:10.818 "unmap": true, 00:05:10.818 "flush": 
true, 00:05:10.818 "reset": true, 00:05:10.818 "nvme_admin": false, 00:05:10.818 "nvme_io": false, 00:05:10.818 "nvme_io_md": false, 00:05:10.818 "write_zeroes": true, 00:05:10.818 "zcopy": true, 00:05:10.818 "get_zone_info": false, 00:05:10.818 "zone_management": false, 00:05:10.818 "zone_append": false, 00:05:10.818 "compare": false, 00:05:10.818 "compare_and_write": false, 00:05:10.818 "abort": true, 00:05:10.818 "seek_hole": false, 00:05:10.818 "seek_data": false, 00:05:10.818 "copy": true, 00:05:10.818 "nvme_iov_md": false 00:05:10.818 }, 00:05:10.818 "memory_domains": [ 00:05:10.818 { 00:05:10.818 "dma_device_id": "system", 00:05:10.818 "dma_device_type": 1 00:05:10.818 }, 00:05:10.818 { 00:05:10.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.818 "dma_device_type": 2 00:05:10.818 } 00:05:10.818 ], 00:05:10.818 "driver_specific": {} 00:05:10.818 }, 00:05:10.818 { 00:05:10.818 "name": "Passthru0", 00:05:10.818 "aliases": [ 00:05:10.818 "d94fe93b-ee2d-5af1-a0e4-e735102e5690" 00:05:10.818 ], 00:05:10.818 "product_name": "passthru", 00:05:10.818 "block_size": 512, 00:05:10.818 "num_blocks": 16384, 00:05:10.818 "uuid": "d94fe93b-ee2d-5af1-a0e4-e735102e5690", 00:05:10.818 "assigned_rate_limits": { 00:05:10.818 "rw_ios_per_sec": 0, 00:05:10.818 "rw_mbytes_per_sec": 0, 00:05:10.818 "r_mbytes_per_sec": 0, 00:05:10.818 "w_mbytes_per_sec": 0 00:05:10.818 }, 00:05:10.818 "claimed": false, 00:05:10.818 "zoned": false, 00:05:10.818 "supported_io_types": { 00:05:10.818 "read": true, 00:05:10.818 "write": true, 00:05:10.818 "unmap": true, 00:05:10.818 "flush": true, 00:05:10.818 "reset": true, 00:05:10.818 "nvme_admin": false, 00:05:10.818 "nvme_io": false, 00:05:10.818 "nvme_io_md": false, 00:05:10.818 "write_zeroes": true, 00:05:10.818 "zcopy": true, 00:05:10.818 "get_zone_info": false, 00:05:10.819 "zone_management": false, 00:05:10.819 "zone_append": false, 00:05:10.819 "compare": false, 00:05:10.819 "compare_and_write": false, 00:05:10.819 "abort": true, 00:05:10.819 "seek_hole": false, 00:05:10.819 "seek_data": false, 00:05:10.819 "copy": true, 00:05:10.819 "nvme_iov_md": false 00:05:10.819 }, 00:05:10.819 "memory_domains": [ 00:05:10.819 { 00:05:10.819 "dma_device_id": "system", 00:05:10.819 "dma_device_type": 1 00:05:10.819 }, 00:05:10.819 { 00:05:10.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.819 "dma_device_type": 2 00:05:10.819 } 00:05:10.819 ], 00:05:10.819 "driver_specific": { 00:05:10.819 "passthru": { 00:05:10.819 "name": "Passthru0", 00:05:10.819 "base_bdev_name": "Malloc0" 00:05:10.819 } 00:05:10.819 } 00:05:10.819 } 00:05:10.819 ]' 00:05:10.819 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:10.819 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:10.819 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:10.819 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.819 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.819 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.819 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:10.819 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.819 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.819 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.819 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:10.819 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.819 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.819 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.819 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:10.819 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:11.077 09:38:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:11.077 00:05:11.077 real 0m0.218s 00:05:11.077 user 0m0.141s 00:05:11.077 sys 0m0.021s 00:05:11.077 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.077 09:38:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.077 ************************************ 00:05:11.077 END TEST rpc_integrity 00:05:11.077 ************************************ 00:05:11.077 09:38:47 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:11.077 09:38:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.077 09:38:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.077 09:38:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.077 ************************************ 00:05:11.077 START TEST rpc_plugins 00:05:11.077 ************************************ 00:05:11.077 09:38:47 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:11.077 09:38:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:11.077 09:38:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.077 09:38:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.077 09:38:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.077 09:38:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:11.077 09:38:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:11.077 09:38:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.077 09:38:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.077 09:38:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.077 09:38:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:11.077 { 00:05:11.077 "name": "Malloc1", 00:05:11.077 "aliases": [ 00:05:11.077 "0d942de3-b343-4d8a-9932-a927492eced2" 00:05:11.077 ], 00:05:11.077 "product_name": "Malloc disk", 00:05:11.077 "block_size": 4096, 00:05:11.077 "num_blocks": 256, 00:05:11.077 "uuid": "0d942de3-b343-4d8a-9932-a927492eced2", 00:05:11.077 "assigned_rate_limits": { 00:05:11.077 "rw_ios_per_sec": 0, 00:05:11.077 "rw_mbytes_per_sec": 0, 00:05:11.077 "r_mbytes_per_sec": 0, 00:05:11.077 "w_mbytes_per_sec": 0 00:05:11.077 }, 00:05:11.077 "claimed": false, 00:05:11.077 "zoned": false, 00:05:11.077 "supported_io_types": { 00:05:11.077 "read": true, 00:05:11.077 "write": true, 00:05:11.077 "unmap": true, 00:05:11.077 "flush": true, 00:05:11.077 "reset": true, 00:05:11.077 "nvme_admin": false, 00:05:11.077 "nvme_io": false, 00:05:11.077 "nvme_io_md": false, 00:05:11.077 "write_zeroes": true, 00:05:11.077 "zcopy": true, 00:05:11.077 "get_zone_info": false, 00:05:11.077 "zone_management": false, 00:05:11.077 "zone_append": false, 00:05:11.077 "compare": false, 00:05:11.077 "compare_and_write": false, 00:05:11.077 "abort": true, 00:05:11.077 "seek_hole": false, 00:05:11.077 "seek_data": false, 00:05:11.077 "copy": true, 00:05:11.077 "nvme_iov_md": false 
00:05:11.077 }, 00:05:11.077 "memory_domains": [ 00:05:11.077 { 00:05:11.077 "dma_device_id": "system", 00:05:11.077 "dma_device_type": 1 00:05:11.077 }, 00:05:11.077 { 00:05:11.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.077 "dma_device_type": 2 00:05:11.077 } 00:05:11.077 ], 00:05:11.077 "driver_specific": {} 00:05:11.077 } 00:05:11.077 ]' 00:05:11.077 09:38:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:11.077 09:38:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:11.077 09:38:47 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:11.077 09:38:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.077 09:38:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.077 09:38:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.077 09:38:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:11.077 09:38:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.077 09:38:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.077 09:38:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.078 09:38:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:11.078 09:38:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:11.078 09:38:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:11.078 00:05:11.078 real 0m0.105s 00:05:11.078 user 0m0.069s 00:05:11.078 sys 0m0.008s 00:05:11.078 09:38:47 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.078 09:38:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.078 ************************************ 00:05:11.078 END TEST rpc_plugins 00:05:11.078 ************************************ 00:05:11.078 09:38:47 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:11.078 09:38:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.078 09:38:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.078 09:38:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.078 ************************************ 00:05:11.078 START TEST rpc_trace_cmd_test 00:05:11.078 ************************************ 00:05:11.078 09:38:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:11.078 09:38:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:11.078 09:38:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:11.078 09:38:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.078 09:38:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.078 09:38:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.078 09:38:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:11.078 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3608459", 00:05:11.078 "tpoint_group_mask": "0x8", 00:05:11.078 "iscsi_conn": { 00:05:11.078 "mask": "0x2", 00:05:11.078 "tpoint_mask": "0x0" 00:05:11.078 }, 00:05:11.078 "scsi": { 00:05:11.078 "mask": "0x4", 00:05:11.078 "tpoint_mask": "0x0" 00:05:11.078 }, 00:05:11.078 "bdev": { 00:05:11.078 "mask": "0x8", 00:05:11.078 "tpoint_mask": "0xffffffffffffffff" 00:05:11.078 }, 00:05:11.078 "nvmf_rdma": { 00:05:11.078 "mask": "0x10", 00:05:11.078 "tpoint_mask": "0x0" 00:05:11.078 }, 00:05:11.078 "nvmf_tcp": { 00:05:11.078 "mask": "0x20", 00:05:11.078 
"tpoint_mask": "0x0" 00:05:11.078 }, 00:05:11.078 "ftl": { 00:05:11.078 "mask": "0x40", 00:05:11.078 "tpoint_mask": "0x0" 00:05:11.078 }, 00:05:11.078 "blobfs": { 00:05:11.078 "mask": "0x80", 00:05:11.078 "tpoint_mask": "0x0" 00:05:11.078 }, 00:05:11.078 "dsa": { 00:05:11.078 "mask": "0x200", 00:05:11.078 "tpoint_mask": "0x0" 00:05:11.078 }, 00:05:11.078 "thread": { 00:05:11.078 "mask": "0x400", 00:05:11.078 "tpoint_mask": "0x0" 00:05:11.078 }, 00:05:11.078 "nvme_pcie": { 00:05:11.078 "mask": "0x800", 00:05:11.078 "tpoint_mask": "0x0" 00:05:11.078 }, 00:05:11.078 "iaa": { 00:05:11.078 "mask": "0x1000", 00:05:11.078 "tpoint_mask": "0x0" 00:05:11.078 }, 00:05:11.078 "nvme_tcp": { 00:05:11.078 "mask": "0x2000", 00:05:11.078 "tpoint_mask": "0x0" 00:05:11.078 }, 00:05:11.078 "bdev_nvme": { 00:05:11.078 "mask": "0x4000", 00:05:11.078 "tpoint_mask": "0x0" 00:05:11.078 }, 00:05:11.078 "sock": { 00:05:11.078 "mask": "0x8000", 00:05:11.078 "tpoint_mask": "0x0" 00:05:11.078 }, 00:05:11.078 "blob": { 00:05:11.078 "mask": "0x10000", 00:05:11.078 "tpoint_mask": "0x0" 00:05:11.078 }, 00:05:11.078 "bdev_raid": { 00:05:11.078 "mask": "0x20000", 00:05:11.078 "tpoint_mask": "0x0" 00:05:11.078 }, 00:05:11.078 "scheduler": { 00:05:11.078 "mask": "0x40000", 00:05:11.078 "tpoint_mask": "0x0" 00:05:11.078 } 00:05:11.078 }' 00:05:11.078 09:38:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:11.337 09:38:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:11.337 09:38:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:11.337 09:38:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:11.337 09:38:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:11.337 09:38:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:11.337 09:38:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:11.337 09:38:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:11.337 09:38:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:11.337 09:38:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:11.337 00:05:11.337 real 0m0.176s 00:05:11.337 user 0m0.152s 00:05:11.337 sys 0m0.019s 00:05:11.337 09:38:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.337 09:38:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.337 ************************************ 00:05:11.337 END TEST rpc_trace_cmd_test 00:05:11.337 ************************************ 00:05:11.337 09:38:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:11.337 09:38:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:11.337 09:38:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:11.337 09:38:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.337 09:38:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.337 09:38:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.337 ************************************ 00:05:11.337 START TEST rpc_daemon_integrity 00:05:11.337 ************************************ 00:05:11.337 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:11.337 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:11.337 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.337 09:38:48 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.337 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.337 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:11.337 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:11.337 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:11.337 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:11.337 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.337 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.337 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.337 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:11.337 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:11.337 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.337 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.337 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.337 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:11.337 { 00:05:11.337 "name": "Malloc2", 00:05:11.337 "aliases": [ 00:05:11.337 "6106a41e-1b72-4299-bd9b-42bc706ca014" 00:05:11.337 ], 00:05:11.337 "product_name": "Malloc disk", 00:05:11.337 "block_size": 512, 00:05:11.337 "num_blocks": 16384, 00:05:11.337 "uuid": "6106a41e-1b72-4299-bd9b-42bc706ca014", 00:05:11.337 "assigned_rate_limits": { 00:05:11.337 "rw_ios_per_sec": 0, 00:05:11.337 "rw_mbytes_per_sec": 0, 00:05:11.337 "r_mbytes_per_sec": 0, 00:05:11.337 "w_mbytes_per_sec": 0 00:05:11.337 }, 00:05:11.337 "claimed": false, 00:05:11.337 "zoned": false, 00:05:11.337 "supported_io_types": { 00:05:11.337 "read": true, 00:05:11.337 "write": true, 00:05:11.337 "unmap": true, 00:05:11.337 "flush": true, 00:05:11.337 "reset": true, 00:05:11.337 "nvme_admin": false, 00:05:11.337 "nvme_io": false, 00:05:11.337 "nvme_io_md": false, 00:05:11.337 "write_zeroes": true, 00:05:11.337 "zcopy": true, 00:05:11.337 "get_zone_info": false, 00:05:11.337 "zone_management": false, 00:05:11.337 "zone_append": false, 00:05:11.337 "compare": false, 00:05:11.337 "compare_and_write": false, 00:05:11.337 "abort": true, 00:05:11.337 "seek_hole": false, 00:05:11.337 "seek_data": false, 00:05:11.337 "copy": true, 00:05:11.337 "nvme_iov_md": false 00:05:11.337 }, 00:05:11.337 "memory_domains": [ 00:05:11.337 { 00:05:11.337 "dma_device_id": "system", 00:05:11.337 "dma_device_type": 1 00:05:11.337 }, 00:05:11.337 { 00:05:11.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.337 "dma_device_type": 2 00:05:11.337 } 00:05:11.337 ], 00:05:11.337 "driver_specific": {} 00:05:11.337 } 00:05:11.337 ]' 00:05:11.337 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:11.595 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:11.595 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:11.595 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.596 [2024-11-20 09:38:48.277443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:11.596 
[2024-11-20 09:38:48.277485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:11.596 [2024-11-20 09:38:48.277508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24ef200 00:05:11.596 [2024-11-20 09:38:48.277522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:11.596 [2024-11-20 09:38:48.278722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:11.596 [2024-11-20 09:38:48.278752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:11.596 Passthru0 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:11.596 { 00:05:11.596 "name": "Malloc2", 00:05:11.596 "aliases": [ 00:05:11.596 "6106a41e-1b72-4299-bd9b-42bc706ca014" 00:05:11.596 ], 00:05:11.596 "product_name": "Malloc disk", 00:05:11.596 "block_size": 512, 00:05:11.596 "num_blocks": 16384, 00:05:11.596 "uuid": "6106a41e-1b72-4299-bd9b-42bc706ca014", 00:05:11.596 "assigned_rate_limits": { 00:05:11.596 "rw_ios_per_sec": 0, 00:05:11.596 "rw_mbytes_per_sec": 0, 00:05:11.596 "r_mbytes_per_sec": 0, 00:05:11.596 "w_mbytes_per_sec": 0 00:05:11.596 }, 00:05:11.596 "claimed": true, 00:05:11.596 "claim_type": "exclusive_write", 00:05:11.596 "zoned": false, 00:05:11.596 "supported_io_types": { 00:05:11.596 "read": true, 00:05:11.596 "write": true, 00:05:11.596 "unmap": true, 00:05:11.596 "flush": true, 00:05:11.596 "reset": true, 00:05:11.596 "nvme_admin": false, 00:05:11.596 "nvme_io": false, 00:05:11.596 "nvme_io_md": false, 00:05:11.596 "write_zeroes": true, 00:05:11.596 "zcopy": true, 00:05:11.596 "get_zone_info": false, 00:05:11.596 "zone_management": false, 00:05:11.596 "zone_append": false, 00:05:11.596 "compare": false, 00:05:11.596 "compare_and_write": false, 00:05:11.596 "abort": true, 00:05:11.596 "seek_hole": false, 00:05:11.596 "seek_data": false, 00:05:11.596 "copy": true, 00:05:11.596 "nvme_iov_md": false 00:05:11.596 }, 00:05:11.596 "memory_domains": [ 00:05:11.596 { 00:05:11.596 "dma_device_id": "system", 00:05:11.596 "dma_device_type": 1 00:05:11.596 }, 00:05:11.596 { 00:05:11.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.596 "dma_device_type": 2 00:05:11.596 } 00:05:11.596 ], 00:05:11.596 "driver_specific": {} 00:05:11.596 }, 00:05:11.596 { 00:05:11.596 "name": "Passthru0", 00:05:11.596 "aliases": [ 00:05:11.596 "00ffa39f-e2cb-5eef-8312-6f31d82f153a" 00:05:11.596 ], 00:05:11.596 "product_name": "passthru", 00:05:11.596 "block_size": 512, 00:05:11.596 "num_blocks": 16384, 00:05:11.596 "uuid": "00ffa39f-e2cb-5eef-8312-6f31d82f153a", 00:05:11.596 "assigned_rate_limits": { 00:05:11.596 "rw_ios_per_sec": 0, 00:05:11.596 "rw_mbytes_per_sec": 0, 00:05:11.596 "r_mbytes_per_sec": 0, 00:05:11.596 "w_mbytes_per_sec": 0 00:05:11.596 }, 00:05:11.596 "claimed": false, 00:05:11.596 "zoned": false, 00:05:11.596 "supported_io_types": { 00:05:11.596 "read": true, 00:05:11.596 "write": true, 00:05:11.596 "unmap": true, 00:05:11.596 "flush": true, 00:05:11.596 "reset": true, 
00:05:11.596 "nvme_admin": false, 00:05:11.596 "nvme_io": false, 00:05:11.596 "nvme_io_md": false, 00:05:11.596 "write_zeroes": true, 00:05:11.596 "zcopy": true, 00:05:11.596 "get_zone_info": false, 00:05:11.596 "zone_management": false, 00:05:11.596 "zone_append": false, 00:05:11.596 "compare": false, 00:05:11.596 "compare_and_write": false, 00:05:11.596 "abort": true, 00:05:11.596 "seek_hole": false, 00:05:11.596 "seek_data": false, 00:05:11.596 "copy": true, 00:05:11.596 "nvme_iov_md": false 00:05:11.596 }, 00:05:11.596 "memory_domains": [ 00:05:11.596 { 00:05:11.596 "dma_device_id": "system", 00:05:11.596 "dma_device_type": 1 00:05:11.596 }, 00:05:11.596 { 00:05:11.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.596 "dma_device_type": 2 00:05:11.596 } 00:05:11.596 ], 00:05:11.596 "driver_specific": { 00:05:11.596 "passthru": { 00:05:11.596 "name": "Passthru0", 00:05:11.596 "base_bdev_name": "Malloc2" 00:05:11.596 } 00:05:11.596 } 00:05:11.596 } 00:05:11.596 ]' 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:11.596 00:05:11.596 real 0m0.210s 00:05:11.596 user 0m0.139s 00:05:11.596 sys 0m0.016s 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.596 09:38:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.596 ************************************ 00:05:11.596 END TEST rpc_daemon_integrity 00:05:11.596 ************************************ 00:05:11.596 09:38:48 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:11.596 09:38:48 rpc -- rpc/rpc.sh@84 -- # killprocess 3608459 00:05:11.596 09:38:48 rpc -- common/autotest_common.sh@954 -- # '[' -z 3608459 ']' 00:05:11.596 09:38:48 rpc -- common/autotest_common.sh@958 -- # kill -0 3608459 00:05:11.596 09:38:48 rpc -- common/autotest_common.sh@959 -- # uname 00:05:11.596 09:38:48 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.596 09:38:48 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3608459 
00:05:11.596 09:38:48 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.596 09:38:48 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.596 09:38:48 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3608459' 00:05:11.596 killing process with pid 3608459 00:05:11.596 09:38:48 rpc -- common/autotest_common.sh@973 -- # kill 3608459 00:05:11.596 09:38:48 rpc -- common/autotest_common.sh@978 -- # wait 3608459 00:05:12.162 00:05:12.162 real 0m1.951s 00:05:12.162 user 0m2.389s 00:05:12.162 sys 0m0.617s 00:05:12.162 09:38:48 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.162 09:38:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.162 ************************************ 00:05:12.162 END TEST rpc 00:05:12.162 ************************************ 00:05:12.162 09:38:48 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:12.162 09:38:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.162 09:38:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.162 09:38:48 -- common/autotest_common.sh@10 -- # set +x 00:05:12.162 ************************************ 00:05:12.162 START TEST skip_rpc 00:05:12.162 ************************************ 00:05:12.162 09:38:48 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:12.162 * Looking for test storage... 00:05:12.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:12.162 09:38:48 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.162 09:38:48 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.162 09:38:48 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.162 09:38:49 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.162 09:38:49 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:12.162 09:38:49 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.162 09:38:49 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.162 --rc genhtml_branch_coverage=1 00:05:12.162 --rc genhtml_function_coverage=1 00:05:12.162 --rc genhtml_legend=1 00:05:12.162 --rc geninfo_all_blocks=1 00:05:12.162 --rc geninfo_unexecuted_blocks=1 00:05:12.162 00:05:12.162 ' 00:05:12.162 09:38:49 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.162 --rc genhtml_branch_coverage=1 00:05:12.162 --rc genhtml_function_coverage=1 00:05:12.162 --rc genhtml_legend=1 00:05:12.162 --rc geninfo_all_blocks=1 00:05:12.162 --rc geninfo_unexecuted_blocks=1 00:05:12.162 00:05:12.162 ' 00:05:12.162 09:38:49 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.162 --rc genhtml_branch_coverage=1 00:05:12.162 --rc genhtml_function_coverage=1 00:05:12.162 --rc genhtml_legend=1 00:05:12.162 --rc geninfo_all_blocks=1 00:05:12.162 --rc geninfo_unexecuted_blocks=1 00:05:12.162 00:05:12.162 ' 00:05:12.162 09:38:49 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.162 --rc genhtml_branch_coverage=1 00:05:12.162 --rc genhtml_function_coverage=1 00:05:12.162 --rc genhtml_legend=1 00:05:12.162 --rc geninfo_all_blocks=1 00:05:12.162 --rc geninfo_unexecuted_blocks=1 00:05:12.162 00:05:12.162 ' 00:05:12.162 09:38:49 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:12.162 09:38:49 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:12.162 09:38:49 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:12.162 09:38:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.162 09:38:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.162 09:38:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.422 ************************************ 00:05:12.422 START TEST skip_rpc 00:05:12.422 ************************************ 00:05:12.422 09:38:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:12.422 
09:38:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3608841 00:05:12.422 09:38:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.422 09:38:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:12.422 09:38:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:12.422 [2024-11-20 09:38:49.140885] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:05:12.422 [2024-11-20 09:38:49.140947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3608841 ] 00:05:12.422 [2024-11-20 09:38:49.201544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.422 [2024-11-20 09:38:49.259776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3608841 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3608841 ']' 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3608841 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3608841 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3608841' 00:05:17.683 killing process with pid 3608841 00:05:17.683 09:38:54 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3608841 00:05:17.683 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3608841 00:05:17.683 00:05:17.683 real 0m5.444s 00:05:17.683 user 0m5.159s 00:05:17.683 sys 0m0.293s 00:05:17.684 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.684 09:38:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.684 ************************************ 00:05:17.684 END TEST skip_rpc 00:05:17.684 ************************************ 00:05:17.684 09:38:54 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:17.684 09:38:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.684 09:38:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.684 09:38:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.684 ************************************ 00:05:17.684 START TEST skip_rpc_with_json 00:05:17.684 ************************************ 00:05:17.684 09:38:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:17.684 09:38:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:17.684 09:38:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3609534 00:05:17.684 09:38:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.684 09:38:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.684 09:38:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3609534 00:05:17.684 09:38:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3609534 ']' 00:05:17.684 09:38:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.684 09:38:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.684 09:38:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.684 09:38:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.684 09:38:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.942 [2024-11-20 09:38:54.642353] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:05:17.942 [2024-11-20 09:38:54.642467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3609534 ] 00:05:17.942 [2024-11-20 09:38:54.707708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.942 [2024-11-20 09:38:54.761416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.200 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.200 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:18.200 09:38:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:18.200 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.200 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.200 [2024-11-20 09:38:55.022881] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:18.200 request: 00:05:18.200 { 00:05:18.200 "trtype": "tcp", 00:05:18.200 "method": "nvmf_get_transports", 00:05:18.200 "req_id": 1 00:05:18.200 } 00:05:18.200 Got JSON-RPC error response 00:05:18.200 response: 00:05:18.200 { 00:05:18.200 "code": -19, 00:05:18.200 "message": "No such device" 00:05:18.200 } 00:05:18.200 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:18.200 09:38:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:18.200 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.200 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.200 [2024-11-20 09:38:55.030987] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:18.200 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.200 09:38:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:18.200 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.200 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.459 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.459 09:38:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:18.459 { 00:05:18.459 "subsystems": [ 00:05:18.459 { 00:05:18.459 "subsystem": "fsdev", 00:05:18.459 "config": [ 00:05:18.459 { 00:05:18.459 "method": "fsdev_set_opts", 00:05:18.459 "params": { 00:05:18.459 "fsdev_io_pool_size": 65535, 00:05:18.459 "fsdev_io_cache_size": 256 00:05:18.459 } 00:05:18.459 } 00:05:18.459 ] 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "subsystem": "vfio_user_target", 00:05:18.459 "config": null 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "subsystem": "keyring", 00:05:18.459 "config": [] 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "subsystem": "iobuf", 00:05:18.459 "config": [ 00:05:18.459 { 00:05:18.459 "method": "iobuf_set_options", 00:05:18.459 "params": { 00:05:18.459 "small_pool_count": 8192, 00:05:18.459 "large_pool_count": 1024, 00:05:18.459 "small_bufsize": 8192, 00:05:18.459 "large_bufsize": 135168, 00:05:18.459 "enable_numa": false 00:05:18.459 } 00:05:18.459 } 
00:05:18.459 ] 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "subsystem": "sock", 00:05:18.459 "config": [ 00:05:18.459 { 00:05:18.459 "method": "sock_set_default_impl", 00:05:18.459 "params": { 00:05:18.459 "impl_name": "posix" 00:05:18.459 } 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "method": "sock_impl_set_options", 00:05:18.459 "params": { 00:05:18.459 "impl_name": "ssl", 00:05:18.459 "recv_buf_size": 4096, 00:05:18.459 "send_buf_size": 4096, 00:05:18.459 "enable_recv_pipe": true, 00:05:18.459 "enable_quickack": false, 00:05:18.459 "enable_placement_id": 0, 00:05:18.459 "enable_zerocopy_send_server": true, 00:05:18.459 "enable_zerocopy_send_client": false, 00:05:18.459 "zerocopy_threshold": 0, 00:05:18.459 "tls_version": 0, 00:05:18.459 "enable_ktls": false 00:05:18.459 } 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "method": "sock_impl_set_options", 00:05:18.459 "params": { 00:05:18.459 "impl_name": "posix", 00:05:18.459 "recv_buf_size": 2097152, 00:05:18.459 "send_buf_size": 2097152, 00:05:18.459 "enable_recv_pipe": true, 00:05:18.459 "enable_quickack": false, 00:05:18.459 "enable_placement_id": 0, 00:05:18.459 "enable_zerocopy_send_server": true, 00:05:18.459 "enable_zerocopy_send_client": false, 00:05:18.459 "zerocopy_threshold": 0, 00:05:18.459 "tls_version": 0, 00:05:18.459 "enable_ktls": false 00:05:18.459 } 00:05:18.459 } 00:05:18.459 ] 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "subsystem": "vmd", 00:05:18.459 "config": [] 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "subsystem": "accel", 00:05:18.459 "config": [ 00:05:18.459 { 00:05:18.459 "method": "accel_set_options", 00:05:18.459 "params": { 00:05:18.459 "small_cache_size": 128, 00:05:18.459 "large_cache_size": 16, 00:05:18.459 "task_count": 2048, 00:05:18.459 "sequence_count": 2048, 00:05:18.459 "buf_count": 2048 00:05:18.459 } 00:05:18.459 } 00:05:18.459 ] 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "subsystem": "bdev", 00:05:18.459 "config": [ 00:05:18.459 { 00:05:18.459 "method": "bdev_set_options", 00:05:18.459 "params": { 00:05:18.459 "bdev_io_pool_size": 65535, 00:05:18.459 "bdev_io_cache_size": 256, 00:05:18.459 "bdev_auto_examine": true, 00:05:18.459 "iobuf_small_cache_size": 128, 00:05:18.459 "iobuf_large_cache_size": 16 00:05:18.459 } 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "method": "bdev_raid_set_options", 00:05:18.459 "params": { 00:05:18.459 "process_window_size_kb": 1024, 00:05:18.459 "process_max_bandwidth_mb_sec": 0 00:05:18.459 } 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "method": "bdev_iscsi_set_options", 00:05:18.459 "params": { 00:05:18.459 "timeout_sec": 30 00:05:18.459 } 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "method": "bdev_nvme_set_options", 00:05:18.459 "params": { 00:05:18.459 "action_on_timeout": "none", 00:05:18.459 "timeout_us": 0, 00:05:18.459 "timeout_admin_us": 0, 00:05:18.459 "keep_alive_timeout_ms": 10000, 00:05:18.459 "arbitration_burst": 0, 00:05:18.459 "low_priority_weight": 0, 00:05:18.459 "medium_priority_weight": 0, 00:05:18.459 "high_priority_weight": 0, 00:05:18.459 "nvme_adminq_poll_period_us": 10000, 00:05:18.459 "nvme_ioq_poll_period_us": 0, 00:05:18.459 "io_queue_requests": 0, 00:05:18.459 "delay_cmd_submit": true, 00:05:18.459 "transport_retry_count": 4, 00:05:18.459 "bdev_retry_count": 3, 00:05:18.459 "transport_ack_timeout": 0, 00:05:18.459 "ctrlr_loss_timeout_sec": 0, 00:05:18.459 "reconnect_delay_sec": 0, 00:05:18.459 "fast_io_fail_timeout_sec": 0, 00:05:18.459 "disable_auto_failback": false, 00:05:18.459 "generate_uuids": false, 00:05:18.459 "transport_tos": 
0, 00:05:18.459 "nvme_error_stat": false, 00:05:18.459 "rdma_srq_size": 0, 00:05:18.459 "io_path_stat": false, 00:05:18.459 "allow_accel_sequence": false, 00:05:18.459 "rdma_max_cq_size": 0, 00:05:18.459 "rdma_cm_event_timeout_ms": 0, 00:05:18.459 "dhchap_digests": [ 00:05:18.459 "sha256", 00:05:18.459 "sha384", 00:05:18.459 "sha512" 00:05:18.459 ], 00:05:18.459 "dhchap_dhgroups": [ 00:05:18.459 "null", 00:05:18.459 "ffdhe2048", 00:05:18.459 "ffdhe3072", 00:05:18.459 "ffdhe4096", 00:05:18.459 "ffdhe6144", 00:05:18.459 "ffdhe8192" 00:05:18.459 ] 00:05:18.459 } 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "method": "bdev_nvme_set_hotplug", 00:05:18.459 "params": { 00:05:18.459 "period_us": 100000, 00:05:18.459 "enable": false 00:05:18.459 } 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "method": "bdev_wait_for_examine" 00:05:18.459 } 00:05:18.459 ] 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "subsystem": "scsi", 00:05:18.459 "config": null 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "subsystem": "scheduler", 00:05:18.459 "config": [ 00:05:18.459 { 00:05:18.459 "method": "framework_set_scheduler", 00:05:18.459 "params": { 00:05:18.459 "name": "static" 00:05:18.459 } 00:05:18.459 } 00:05:18.459 ] 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "subsystem": "vhost_scsi", 00:05:18.459 "config": [] 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "subsystem": "vhost_blk", 00:05:18.459 "config": [] 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "subsystem": "ublk", 00:05:18.459 "config": [] 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "subsystem": "nbd", 00:05:18.459 "config": [] 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "subsystem": "nvmf", 00:05:18.459 "config": [ 00:05:18.459 { 00:05:18.459 "method": "nvmf_set_config", 00:05:18.459 "params": { 00:05:18.459 "discovery_filter": "match_any", 00:05:18.459 "admin_cmd_passthru": { 00:05:18.459 "identify_ctrlr": false 00:05:18.459 }, 00:05:18.459 "dhchap_digests": [ 00:05:18.459 "sha256", 00:05:18.459 "sha384", 00:05:18.459 "sha512" 00:05:18.459 ], 00:05:18.459 "dhchap_dhgroups": [ 00:05:18.459 "null", 00:05:18.459 "ffdhe2048", 00:05:18.459 "ffdhe3072", 00:05:18.459 "ffdhe4096", 00:05:18.459 "ffdhe6144", 00:05:18.459 "ffdhe8192" 00:05:18.459 ] 00:05:18.459 } 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "method": "nvmf_set_max_subsystems", 00:05:18.459 "params": { 00:05:18.459 "max_subsystems": 1024 00:05:18.459 } 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "method": "nvmf_set_crdt", 00:05:18.459 "params": { 00:05:18.459 "crdt1": 0, 00:05:18.459 "crdt2": 0, 00:05:18.459 "crdt3": 0 00:05:18.459 } 00:05:18.459 }, 00:05:18.459 { 00:05:18.459 "method": "nvmf_create_transport", 00:05:18.459 "params": { 00:05:18.459 "trtype": "TCP", 00:05:18.459 "max_queue_depth": 128, 00:05:18.459 "max_io_qpairs_per_ctrlr": 127, 00:05:18.459 "in_capsule_data_size": 4096, 00:05:18.459 "max_io_size": 131072, 00:05:18.459 "io_unit_size": 131072, 00:05:18.459 "max_aq_depth": 128, 00:05:18.459 "num_shared_buffers": 511, 00:05:18.459 "buf_cache_size": 4294967295, 00:05:18.459 "dif_insert_or_strip": false, 00:05:18.459 "zcopy": false, 00:05:18.459 "c2h_success": true, 00:05:18.460 "sock_priority": 0, 00:05:18.460 "abort_timeout_sec": 1, 00:05:18.460 "ack_timeout": 0, 00:05:18.460 "data_wr_pool_size": 0 00:05:18.460 } 00:05:18.460 } 00:05:18.460 ] 00:05:18.460 }, 00:05:18.460 { 00:05:18.460 "subsystem": "iscsi", 00:05:18.460 "config": [ 00:05:18.460 { 00:05:18.460 "method": "iscsi_set_options", 00:05:18.460 "params": { 00:05:18.460 "node_base": "iqn.2016-06.io.spdk", 00:05:18.460 "max_sessions": 
128, 00:05:18.460 "max_connections_per_session": 2, 00:05:18.460 "max_queue_depth": 64, 00:05:18.460 "default_time2wait": 2, 00:05:18.460 "default_time2retain": 20, 00:05:18.460 "first_burst_length": 8192, 00:05:18.460 "immediate_data": true, 00:05:18.460 "allow_duplicated_isid": false, 00:05:18.460 "error_recovery_level": 0, 00:05:18.460 "nop_timeout": 60, 00:05:18.460 "nop_in_interval": 30, 00:05:18.460 "disable_chap": false, 00:05:18.460 "require_chap": false, 00:05:18.460 "mutual_chap": false, 00:05:18.460 "chap_group": 0, 00:05:18.460 "max_large_datain_per_connection": 64, 00:05:18.460 "max_r2t_per_connection": 4, 00:05:18.460 "pdu_pool_size": 36864, 00:05:18.460 "immediate_data_pool_size": 16384, 00:05:18.460 "data_out_pool_size": 2048 00:05:18.460 } 00:05:18.460 } 00:05:18.460 ] 00:05:18.460 } 00:05:18.460 ] 00:05:18.460 } 00:05:18.460 09:38:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:18.460 09:38:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3609534 00:05:18.460 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3609534 ']' 00:05:18.460 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3609534 00:05:18.460 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:18.460 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.460 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3609534 00:05:18.460 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.460 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.460 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3609534' 00:05:18.460 killing process with pid 3609534 00:05:18.460 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3609534 00:05:18.460 09:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3609534 00:05:19.025 09:38:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3609674 00:05:19.026 09:38:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:19.026 09:38:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:24.361 09:39:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3609674 00:05:24.361 09:39:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3609674 ']' 00:05:24.361 09:39:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3609674 00:05:24.361 09:39:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:24.361 09:39:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.361 09:39:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3609674 00:05:24.361 09:39:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.361 09:39:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.361 09:39:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3609674' 00:05:24.361 killing process with pid 3609674 00:05:24.361 09:39:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3609674 00:05:24.361 09:39:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3609674 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:24.361 00:05:24.361 real 0m6.513s 00:05:24.361 user 0m6.139s 00:05:24.361 sys 0m0.667s 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.361 ************************************ 00:05:24.361 END TEST skip_rpc_with_json 00:05:24.361 ************************************ 00:05:24.361 09:39:01 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:24.361 09:39:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.361 09:39:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.361 09:39:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.361 ************************************ 00:05:24.361 START TEST skip_rpc_with_delay 00:05:24.361 ************************************ 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.361 
[2024-11-20 09:39:01.212626] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:24.361 00:05:24.361 real 0m0.076s 00:05:24.361 user 0m0.058s 00:05:24.361 sys 0m0.018s 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.361 09:39:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:24.361 ************************************ 00:05:24.361 END TEST skip_rpc_with_delay 00:05:24.361 ************************************ 00:05:24.361 09:39:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:24.361 09:39:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:24.361 09:39:01 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:24.361 09:39:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.361 09:39:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.361 09:39:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.619 ************************************ 00:05:24.619 START TEST exit_on_failed_rpc_init 00:05:24.619 ************************************ 00:05:24.619 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:24.619 09:39:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3610469 00:05:24.620 09:39:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.620 09:39:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3610469 00:05:24.620 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3610469 ']' 00:05:24.620 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.620 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.620 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.620 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.620 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:24.620 [2024-11-20 09:39:01.338504] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
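The skip_rpc_with_json case a few entries up is the interesting round trip: configuration is built over RPC, captured with save_config, and a second target then boots from that JSON alone with no RPC server at all. A condensed sketch of the same flow, with illustrative paths for the saved config and the log file:

# First target accepts RPC on the default socket.
./build/bin/spdk_tgt -m 0x1 &
sleep 5

# Create the TCP transport at runtime, then dump the full configuration.
./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py save_config > /tmp/config.json        # illustrative path

# Second target is configured purely from the file; its log proves the
# transport came back, which is exactly what the test greps for.
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json > /tmp/log.txt 2>&1 &
sleep 5
grep -q 'TCP Transport Init' /tmp/log.txt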
00:05:24.620 [2024-11-20 09:39:01.338603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3610469 ] 00:05:24.620 [2024-11-20 09:39:01.407282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.620 [2024-11-20 09:39:01.467653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.878 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.878 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:24.878 09:39:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.878 09:39:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.878 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:24.878 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.878 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.878 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.878 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.878 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.878 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.878 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.878 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.878 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:24.878 09:39:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.137 [2024-11-20 09:39:01.799092] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:05:25.137 [2024-11-20 09:39:01.799188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3610515 ] 00:05:25.137 [2024-11-20 09:39:01.869515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.137 [2024-11-20 09:39:01.930892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.137 [2024-11-20 09:39:01.930990] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
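The error above is the whole point of exit_on_failed_rpc_init: both targets fall back to the same RPC socket, /var/tmp/spdk.sock, so the second instance cannot listen and spdk_app_start exits non-zero. Outside of this negative test, two targets on one host would simply be given distinct sockets with -r; a sketch with illustrative socket names:

# Each target gets its own RPC socket and its own core mask.
./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
sleep 5

# From then on every RPC names its socket explicitly.
./scripts/rpc.py -s /var/tmp/spdk_a.sock spdk_get_version
./scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version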
00:05:25.137 [2024-11-20 09:39:01.931009] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:25.137 [2024-11-20 09:39:01.931020] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:25.137 09:39:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:25.137 09:39:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:25.137 09:39:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:25.137 09:39:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:25.137 09:39:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:25.137 09:39:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:25.137 09:39:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:25.137 09:39:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3610469 00:05:25.137 09:39:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3610469 ']' 00:05:25.137 09:39:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3610469 00:05:25.137 09:39:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:25.137 09:39:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.137 09:39:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3610469 00:05:25.395 09:39:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.395 09:39:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.395 09:39:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3610469' 00:05:25.395 killing process with pid 3610469 00:05:25.395 09:39:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3610469 00:05:25.395 09:39:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3610469 00:05:25.654 00:05:25.654 real 0m1.195s 00:05:25.654 user 0m1.315s 00:05:25.654 sys 0m0.455s 00:05:25.654 09:39:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.654 09:39:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:25.654 ************************************ 00:05:25.654 END TEST exit_on_failed_rpc_init 00:05:25.654 ************************************ 00:05:25.654 09:39:02 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:25.654 00:05:25.654 real 0m13.590s 00:05:25.654 user 0m12.858s 00:05:25.654 sys 0m1.625s 00:05:25.654 09:39:02 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.654 09:39:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.654 ************************************ 00:05:25.654 END TEST skip_rpc 00:05:25.654 ************************************ 00:05:25.654 09:39:02 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:25.654 09:39:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.654 09:39:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.654 09:39:02 -- 
common/autotest_common.sh@10 -- # set +x 00:05:25.654 ************************************ 00:05:25.654 START TEST rpc_client 00:05:25.654 ************************************ 00:05:25.654 09:39:02 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:25.912 * Looking for test storage... 00:05:25.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:25.912 09:39:02 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:25.912 09:39:02 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:25.912 09:39:02 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:25.912 09:39:02 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.912 09:39:02 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:25.912 09:39:02 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.912 09:39:02 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:25.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.913 --rc genhtml_branch_coverage=1 00:05:25.913 --rc genhtml_function_coverage=1 00:05:25.913 --rc genhtml_legend=1 00:05:25.913 --rc geninfo_all_blocks=1 00:05:25.913 --rc geninfo_unexecuted_blocks=1 00:05:25.913 00:05:25.913 ' 00:05:25.913 09:39:02 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:25.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.913 --rc genhtml_branch_coverage=1 00:05:25.913 --rc genhtml_function_coverage=1 00:05:25.913 --rc genhtml_legend=1 00:05:25.913 --rc geninfo_all_blocks=1 00:05:25.913 --rc geninfo_unexecuted_blocks=1 00:05:25.913 00:05:25.913 ' 00:05:25.913 09:39:02 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:25.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.913 --rc genhtml_branch_coverage=1 00:05:25.913 --rc genhtml_function_coverage=1 00:05:25.913 --rc genhtml_legend=1 00:05:25.913 --rc geninfo_all_blocks=1 00:05:25.913 --rc geninfo_unexecuted_blocks=1 00:05:25.913 00:05:25.913 ' 00:05:25.913 09:39:02 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:25.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.913 --rc genhtml_branch_coverage=1 00:05:25.913 --rc genhtml_function_coverage=1 00:05:25.913 --rc genhtml_legend=1 00:05:25.913 --rc geninfo_all_blocks=1 00:05:25.913 --rc geninfo_unexecuted_blocks=1 00:05:25.913 00:05:25.913 ' 00:05:25.913 09:39:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:25.913 OK 00:05:25.913 09:39:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:25.913 00:05:25.913 real 0m0.170s 00:05:25.913 user 0m0.106s 00:05:25.913 sys 0m0.073s 00:05:25.913 09:39:02 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.913 09:39:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:25.913 ************************************ 00:05:25.913 END TEST rpc_client 00:05:25.913 ************************************ 00:05:25.913 09:39:02 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
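rpc_client_test above exercises the C JSON-RPC client library directly; the day-to-day counterpart is the Python client that every other test in this log drives through rpc.py. A small example of the same kind of round trip, assuming a target is already listening on the default socket:

# Ask the target which methods it exposes, then call one of them.
./scripts/rpc.py rpc_get_methods
./scripts/rpc.py spdk_get_version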
00:05:25.913 09:39:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.913 09:39:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.913 09:39:02 -- common/autotest_common.sh@10 -- # set +x 00:05:25.913 ************************************ 00:05:25.913 START TEST json_config 00:05:25.913 ************************************ 00:05:25.913 09:39:02 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:25.913 09:39:02 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:25.913 09:39:02 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:25.913 09:39:02 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.172 09:39:02 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.172 09:39:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.172 09:39:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.172 09:39:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.172 09:39:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.172 09:39:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.172 09:39:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.172 09:39:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.172 09:39:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.172 09:39:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.172 09:39:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.172 09:39:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.172 09:39:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:26.172 09:39:02 json_config -- scripts/common.sh@345 -- # : 1 00:05:26.172 09:39:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.172 09:39:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.172 09:39:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:26.172 09:39:02 json_config -- scripts/common.sh@353 -- # local d=1 00:05:26.172 09:39:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.172 09:39:02 json_config -- scripts/common.sh@355 -- # echo 1 00:05:26.172 09:39:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.172 09:39:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:26.172 09:39:02 json_config -- scripts/common.sh@353 -- # local d=2 00:05:26.172 09:39:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.172 09:39:02 json_config -- scripts/common.sh@355 -- # echo 2 00:05:26.172 09:39:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.172 09:39:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.172 09:39:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.172 09:39:02 json_config -- scripts/common.sh@368 -- # return 0 00:05:26.172 09:39:02 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.172 09:39:02 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.172 --rc genhtml_branch_coverage=1 00:05:26.172 --rc genhtml_function_coverage=1 00:05:26.172 --rc genhtml_legend=1 00:05:26.172 --rc geninfo_all_blocks=1 00:05:26.172 --rc geninfo_unexecuted_blocks=1 00:05:26.172 00:05:26.172 ' 00:05:26.172 09:39:02 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.172 --rc genhtml_branch_coverage=1 00:05:26.172 --rc genhtml_function_coverage=1 00:05:26.172 --rc genhtml_legend=1 00:05:26.172 --rc geninfo_all_blocks=1 00:05:26.172 --rc geninfo_unexecuted_blocks=1 00:05:26.172 00:05:26.172 ' 00:05:26.172 09:39:02 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.172 --rc genhtml_branch_coverage=1 00:05:26.172 --rc genhtml_function_coverage=1 00:05:26.172 --rc genhtml_legend=1 00:05:26.172 --rc geninfo_all_blocks=1 00:05:26.172 --rc geninfo_unexecuted_blocks=1 00:05:26.172 00:05:26.172 ' 00:05:26.172 09:39:02 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.172 --rc genhtml_branch_coverage=1 00:05:26.172 --rc genhtml_function_coverage=1 00:05:26.172 --rc genhtml_legend=1 00:05:26.172 --rc geninfo_all_blocks=1 00:05:26.172 --rc geninfo_unexecuted_blocks=1 00:05:26.172 00:05:26.172 ' 00:05:26.172 09:39:02 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.172 09:39:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:26.172 09:39:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.172 09:39:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.172 09:39:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.172 09:39:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.172 09:39:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.172 09:39:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.172 09:39:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:26.172 09:39:02 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.172 09:39:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.172 09:39:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.172 09:39:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:26.172 09:39:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:26.172 09:39:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.172 09:39:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.172 09:39:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.172 09:39:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.172 09:39:02 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:26.172 09:39:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:26.172 09:39:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.172 09:39:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.172 09:39:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.172 09:39:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.172 09:39:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.172 09:39:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.172 09:39:02 json_config -- paths/export.sh@5 -- # export PATH 00:05:26.173 09:39:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.173 09:39:02 json_config -- nvmf/common.sh@51 -- # : 0 00:05:26.173 09:39:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:26.173 09:39:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:05:26.173 09:39:02 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.173 09:39:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.173 09:39:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.173 09:39:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:26.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:26.173 09:39:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:26.173 09:39:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:26.173 09:39:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:26.173 INFO: JSON configuration test init 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:26.173 09:39:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.173 09:39:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:26.173 09:39:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.173 09:39:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.173 09:39:02 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:26.173 09:39:02 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:26.173 09:39:02 json_config -- json_config/common.sh@10 -- # shift 00:05:26.173 09:39:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:26.173 09:39:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:26.173 09:39:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:26.173 09:39:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.173 09:39:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.173 09:39:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3610880 00:05:26.173 09:39:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:26.173 09:39:02 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:26.173 Waiting for target to run... 00:05:26.173 09:39:02 json_config -- json_config/common.sh@25 -- # waitforlisten 3610880 /var/tmp/spdk_tgt.sock 00:05:26.173 09:39:02 json_config -- common/autotest_common.sh@835 -- # '[' -z 3610880 ']' 00:05:26.173 09:39:02 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.173 09:39:02 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.173 09:39:02 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.173 09:39:02 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.173 09:39:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.173 [2024-11-20 09:39:02.989846] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
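The json_config target above is launched with --wait-for-rpc, so it stops short of subsystem initialization and waits to be configured over the explicit socket /var/tmp/spdk_tgt.sock. A minimal sketch of driving such a target, using the same config generator the test hands to load_config a few entries below (the pipe between the two commands is an assumption; the log shows both commands but not the shell plumbing):

# Target idles until RPCs arrive on the named socket.
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
sleep 2   # the test uses its waitforlisten helper instead of a fixed sleep

# Generate a config for the local NVMe devices and feed it to the waiting
# target; load_config applies it and lets initialization continue.
./scripts/gen_nvme.sh --json-with-subsystems | \
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config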
00:05:26.173 [2024-11-20 09:39:02.989938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3610880 ] 00:05:26.742 [2024-11-20 09:39:03.529039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.742 [2024-11-20 09:39:03.584039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.309 09:39:03 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.309 09:39:03 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:27.309 09:39:03 json_config -- json_config/common.sh@26 -- # echo '' 00:05:27.309 00:05:27.309 09:39:03 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:27.309 09:39:03 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:27.309 09:39:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:27.309 09:39:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.309 09:39:03 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:27.309 09:39:03 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:27.309 09:39:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:27.309 09:39:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.309 09:39:03 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:27.309 09:39:03 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:27.309 09:39:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:30.595 09:39:07 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:30.595 09:39:07 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:30.596 09:39:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.596 09:39:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:30.596 09:39:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:30.596 09:39:07 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@54 -- # sort 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:30.596 09:39:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:30.596 09:39:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:30.596 09:39:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.596 09:39:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:30.596 09:39:07 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:30.596 09:39:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:30.855 MallocForNvmf0 00:05:30.855 09:39:07 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:30.855 09:39:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:31.113 MallocForNvmf1 00:05:31.371 09:39:08 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:31.371 09:39:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:31.629 [2024-11-20 09:39:08.286668] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:31.629 09:39:08 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:31.629 09:39:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:31.888 09:39:08 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:31.888 09:39:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:32.146 09:39:08 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:32.146 09:39:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:32.404 09:39:09 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:32.404 09:39:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:32.662 [2024-11-20 09:39:09.358059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:32.662 09:39:09 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:32.662 09:39:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:32.662 09:39:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.662 09:39:09 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:32.662 09:39:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:32.662 09:39:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.662 09:39:09 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:32.662 09:39:09 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:32.662 09:39:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:32.920 MallocBdevForConfigChangeCheck 00:05:32.920 09:39:09 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:32.920 09:39:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:32.920 09:39:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.920 09:39:09 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:32.920 09:39:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.486 09:39:10 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:33.486 INFO: shutting down applications... 
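Taken together, the RPCs above assemble a complete NVMe-oF/TCP target for this test: two malloc bdevs, a TCP transport, one subsystem carrying both bdevs as namespaces, and a listener on 127.0.0.1:4420. The same sequence collected in one place, with the flags exactly as the test passes them and the socket this test uses:

rpc() { ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }

rpc bdev_malloc_create 8 512 --name MallocForNvmf0       # 8 MB, 512-byte blocks
rpc bdev_malloc_create 4 1024 --name MallocForNvmf1      # 4 MB, 1024-byte blocks
rpc nvmf_create_transport -t tcp -u 8192 -c 0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420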
00:05:33.486 09:39:10 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:33.486 09:39:10 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:33.486 09:39:10 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:33.486 09:39:10 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:34.860 Calling clear_iscsi_subsystem 00:05:34.860 Calling clear_nvmf_subsystem 00:05:34.860 Calling clear_nbd_subsystem 00:05:34.860 Calling clear_ublk_subsystem 00:05:34.860 Calling clear_vhost_blk_subsystem 00:05:34.860 Calling clear_vhost_scsi_subsystem 00:05:34.860 Calling clear_bdev_subsystem 00:05:35.118 09:39:11 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:35.118 09:39:11 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:35.118 09:39:11 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:35.118 09:39:11 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.118 09:39:11 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:35.118 09:39:11 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:35.376 09:39:12 json_config -- json_config/json_config.sh@352 -- # break 00:05:35.376 09:39:12 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:35.376 09:39:12 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:35.376 09:39:12 json_config -- json_config/common.sh@31 -- # local app=target 00:05:35.376 09:39:12 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:35.376 09:39:12 json_config -- json_config/common.sh@35 -- # [[ -n 3610880 ]] 00:05:35.376 09:39:12 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3610880 00:05:35.376 09:39:12 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:35.376 09:39:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.376 09:39:12 json_config -- json_config/common.sh@41 -- # kill -0 3610880 00:05:35.376 09:39:12 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:35.944 09:39:12 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:35.944 09:39:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.944 09:39:12 json_config -- json_config/common.sh@41 -- # kill -0 3610880 00:05:35.944 09:39:12 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:35.944 09:39:12 json_config -- json_config/common.sh@43 -- # break 00:05:35.944 09:39:12 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:35.944 09:39:12 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:35.944 SPDK target shutdown done 00:05:35.944 09:39:12 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:35.944 INFO: relaunching applications... 
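The shutdown traced above (json_config/common.sh@31-53) is a SIGINT followed by a bounded liveness poll, after json_config_clear has drained the target with clear_config.py and re-checked save_config through config_filter.py -method check_empty. Restated as a minimal sketch with the pid from this run:

app_pid=3610880                          # target pid in this run
kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do         # at most 30 * 0.5 s before giving up
    kill -0 "$app_pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
    sleep 0.5
done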
00:05:35.944 09:39:12 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:35.944 09:39:12 json_config -- json_config/common.sh@9 -- # local app=target 00:05:35.944 09:39:12 json_config -- json_config/common.sh@10 -- # shift 00:05:35.944 09:39:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.944 09:39:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.944 09:39:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.944 09:39:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.944 09:39:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.944 09:39:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3612596 00:05:35.944 09:39:12 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:35.944 09:39:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:35.944 Waiting for target to run... 00:05:35.944 09:39:12 json_config -- json_config/common.sh@25 -- # waitforlisten 3612596 /var/tmp/spdk_tgt.sock 00:05:35.944 09:39:12 json_config -- common/autotest_common.sh@835 -- # '[' -z 3612596 ']' 00:05:35.944 09:39:12 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.944 09:39:12 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.944 09:39:12 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.944 09:39:12 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.944 09:39:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.944 [2024-11-20 09:39:12.750871] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:05:35.944 [2024-11-20 09:39:12.750952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3612596 ] 00:05:36.202 [2024-11-20 09:39:13.110687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.461 [2024-11-20 09:39:13.155253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.739 [2024-11-20 09:39:16.205202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:39.739 [2024-11-20 09:39:16.237671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:39.739 09:39:16 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.739 09:39:16 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:39.739 09:39:16 json_config -- json_config/common.sh@26 -- # echo '' 00:05:39.739 00:05:39.739 09:39:16 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:39.739 09:39:16 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:39.739 INFO: Checking if target configuration is the same... 
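Relaunching (json_config.sh@379) simply starts spdk_tgt again from the JSON the previous instance saved and blocks until the RPC socket is back; roughly, assuming waitforlisten (from autotest_common.sh) polls the pid/socket pair it is handed, as its arguments in the trace suggest:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

$rootdir/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json $rootdir/spdk_tgt_config.json &
app_pid=$!                                        # 3612596 in this run
waitforlisten "$app_pid" /var/tmp/spdk_tgt.sock   # helper from autotest_common.sh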
00:05:39.739 09:39:16 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.739 09:39:16 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:39.739 09:39:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.739 + '[' 2 -ne 2 ']' 00:05:39.739 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:39.739 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:39.739 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:39.739 +++ basename /dev/fd/62 00:05:39.739 ++ mktemp /tmp/62.XXX 00:05:39.739 + tmp_file_1=/tmp/62.Pml 00:05:39.739 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.739 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:39.739 + tmp_file_2=/tmp/spdk_tgt_config.json.qKg 00:05:39.739 + ret=0 00:05:39.739 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.997 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.997 + diff -u /tmp/62.Pml /tmp/spdk_tgt_config.json.qKg 00:05:39.997 + echo 'INFO: JSON config files are the same' 00:05:39.997 INFO: JSON config files are the same 00:05:39.997 + rm /tmp/62.Pml /tmp/spdk_tgt_config.json.qKg 00:05:39.997 + exit 0 00:05:39.997 09:39:16 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:39.997 09:39:16 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:39.997 INFO: changing configuration and checking if this can be detected... 00:05:39.997 09:39:16 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:39.997 09:39:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:40.255 09:39:16 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.255 09:39:16 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:40.255 09:39:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.255 + '[' 2 -ne 2 ']' 00:05:40.255 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:40.255 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
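The "is the configuration the same" check above is a normalise-then-diff: json_diff.sh runs a fresh save_config and the reference file through config_filter.py -method sort into mktemp files (/tmp/62.Pml and /tmp/spdk_tgt_config.json.qKg in this run) and lets diff -u decide. A sketch, assuming config_filter.py reads the JSON on stdin as the trace suggests (the /tmp file names below are placeholders):

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
filter=$rootdir/test/json_config/config_filter.py

$rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
$filter -method sort < $rootdir/spdk_tgt_config.json > /tmp/ref.json
diff -u /tmp/live.json /tmp/ref.json && echo 'INFO: JSON config files are the same'

The change-detection pass that starts right after deletes MallocBdevForConfigChangeCheck and expects the same diff to come back non-zero (ret=1).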
00:05:40.255 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:40.255 +++ basename /dev/fd/62 00:05:40.255 ++ mktemp /tmp/62.XXX 00:05:40.255 + tmp_file_1=/tmp/62.wX3 00:05:40.255 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.255 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:40.255 + tmp_file_2=/tmp/spdk_tgt_config.json.1gG 00:05:40.255 + ret=0 00:05:40.255 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:40.513 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:40.772 + diff -u /tmp/62.wX3 /tmp/spdk_tgt_config.json.1gG 00:05:40.772 + ret=1 00:05:40.772 + echo '=== Start of file: /tmp/62.wX3 ===' 00:05:40.772 + cat /tmp/62.wX3 00:05:40.772 + echo '=== End of file: /tmp/62.wX3 ===' 00:05:40.772 + echo '' 00:05:40.772 + echo '=== Start of file: /tmp/spdk_tgt_config.json.1gG ===' 00:05:40.772 + cat /tmp/spdk_tgt_config.json.1gG 00:05:40.772 + echo '=== End of file: /tmp/spdk_tgt_config.json.1gG ===' 00:05:40.772 + echo '' 00:05:40.772 + rm /tmp/62.wX3 /tmp/spdk_tgt_config.json.1gG 00:05:40.772 + exit 1 00:05:40.772 09:39:17 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:40.772 INFO: configuration change detected. 00:05:40.772 09:39:17 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:40.772 09:39:17 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:40.772 09:39:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:40.772 09:39:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.772 09:39:17 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:40.772 09:39:17 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:40.772 09:39:17 json_config -- json_config/json_config.sh@324 -- # [[ -n 3612596 ]] 00:05:40.772 09:39:17 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:40.772 09:39:17 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:40.772 09:39:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:40.772 09:39:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.772 09:39:17 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:40.772 09:39:17 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:40.772 09:39:17 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:40.772 09:39:17 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:40.772 09:39:17 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:40.772 09:39:17 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:40.772 09:39:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:40.772 09:39:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.772 09:39:17 json_config -- json_config/json_config.sh@330 -- # killprocess 3612596 00:05:40.772 09:39:17 json_config -- common/autotest_common.sh@954 -- # '[' -z 3612596 ']' 00:05:40.772 09:39:17 json_config -- common/autotest_common.sh@958 -- # kill -0 3612596 00:05:40.772 09:39:17 json_config -- common/autotest_common.sh@959 -- # uname 00:05:40.773 09:39:17 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.773 09:39:17 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3612596 00:05:40.773 09:39:17 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.773 09:39:17 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.773 09:39:17 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3612596' 00:05:40.773 killing process with pid 3612596 00:05:40.773 09:39:17 json_config -- common/autotest_common.sh@973 -- # kill 3612596 00:05:40.773 09:39:17 json_config -- common/autotest_common.sh@978 -- # wait 3612596 00:05:42.670 09:39:19 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.670 09:39:19 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:42.670 09:39:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:42.670 09:39:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.670 09:39:19 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:42.670 09:39:19 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:42.670 INFO: Success 00:05:42.670 00:05:42.670 real 0m16.353s 00:05:42.670 user 0m17.939s 00:05:42.670 sys 0m2.718s 00:05:42.670 09:39:19 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.670 09:39:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.670 ************************************ 00:05:42.670 END TEST json_config 00:05:42.670 ************************************ 00:05:42.670 09:39:19 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:42.670 09:39:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.670 09:39:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.670 09:39:19 -- common/autotest_common.sh@10 -- # set +x 00:05:42.670 ************************************ 00:05:42.670 START TEST json_config_extra_key 00:05:42.670 ************************************ 00:05:42.670 09:39:19 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:42.670 09:39:19 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:42.670 09:39:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:42.670 09:39:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:42.670 09:39:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.670 09:39:19 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.670 09:39:19 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:42.670 09:39:19 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.670 09:39:19 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:42.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.671 --rc genhtml_branch_coverage=1 00:05:42.671 --rc genhtml_function_coverage=1 00:05:42.671 --rc genhtml_legend=1 00:05:42.671 --rc geninfo_all_blocks=1 00:05:42.671 --rc geninfo_unexecuted_blocks=1 00:05:42.671 00:05:42.671 ' 00:05:42.671 09:39:19 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:42.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.671 --rc genhtml_branch_coverage=1 00:05:42.671 --rc genhtml_function_coverage=1 00:05:42.671 --rc genhtml_legend=1 00:05:42.671 --rc geninfo_all_blocks=1 00:05:42.671 --rc geninfo_unexecuted_blocks=1 00:05:42.671 00:05:42.671 ' 00:05:42.671 09:39:19 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:42.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.671 --rc genhtml_branch_coverage=1 00:05:42.671 --rc genhtml_function_coverage=1 00:05:42.671 --rc genhtml_legend=1 00:05:42.671 --rc geninfo_all_blocks=1 00:05:42.671 --rc geninfo_unexecuted_blocks=1 00:05:42.671 00:05:42.671 ' 00:05:42.671 09:39:19 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:42.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.671 --rc genhtml_branch_coverage=1 00:05:42.671 --rc genhtml_function_coverage=1 00:05:42.671 --rc genhtml_legend=1 00:05:42.671 --rc geninfo_all_blocks=1 00:05:42.671 --rc geninfo_unexecuted_blocks=1 00:05:42.671 00:05:42.671 ' 00:05:42.671 09:39:19 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:42.671 09:39:19 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:42.671 09:39:19 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:42.671 09:39:19 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.671 09:39:19 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.671 09:39:19 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.671 09:39:19 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.671 09:39:19 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.671 09:39:19 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:42.671 09:39:19 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:42.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:42.671 09:39:19 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:42.671 09:39:19 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:42.671 09:39:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:42.671 09:39:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:42.671 09:39:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:42.671 09:39:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:42.671 09:39:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:42.671 09:39:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:42.671 09:39:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:42.671 09:39:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:42.671 09:39:19 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:42.671 09:39:19 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:42.671 INFO: launching applications... 
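The "[: : integer expression expected" message from nvmf/common.sh line 33 above is the usual test(1) pitfall: -eq requires integer operands and the tested variable expands to an empty string in this environment, so the check fails non-fatally and the script carries on. A minimal reproduction and two common guards (variable name is illustrative only):

x=''
[ "$x" -eq 1 ]        # -> [: : integer expression expected

[ "${x:-0}" -eq 1 ]   # guard 1: default the value before the numeric test
[[ "$x" == 1 ]]       # guard 2: compare as a string instead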
00:05:42.671 09:39:19 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:42.671 09:39:19 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:42.671 09:39:19 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:42.671 09:39:19 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:42.671 09:39:19 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:42.671 09:39:19 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:42.671 09:39:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:42.671 09:39:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:42.671 09:39:19 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3613519 00:05:42.671 09:39:19 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:42.671 09:39:19 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:42.671 Waiting for target to run... 00:05:42.671 09:39:19 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3613519 /var/tmp/spdk_tgt.sock 00:05:42.671 09:39:19 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3613519 ']' 00:05:42.671 09:39:19 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:42.671 09:39:19 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.671 09:39:19 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:42.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:42.671 09:39:19 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.671 09:39:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:42.671 [2024-11-20 09:39:19.362807] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:05:42.671 [2024-11-20 09:39:19.362902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3613519 ] 00:05:43.237 [2024-11-20 09:39:19.875680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.237 [2024-11-20 09:39:19.929815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.495 09:39:20 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.495 09:39:20 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:43.495 09:39:20 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:43.495 00:05:43.495 09:39:20 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:43.495 INFO: shutting down applications... 
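For context, json_config/common.sh drives every launch from the per-app tables declared above (app_socket, app_params, configs_path); composing the spdk_tgt command line from them reproduces the invocation traced here. A sketch of that composition:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
declare -A app_socket=( ['target']='/var/tmp/spdk_tgt.sock' )
declare -A app_params=( ['target']='-m 0x1 -s 1024' )
declare -A configs_path=( ['target']="$rootdir/test/json_config/extra_key.json" )
declare -A app_pid=( ['target']='' )

app=target
$rootdir/build/bin/spdk_tgt ${app_params[$app]} -r ${app_socket[$app]} --json ${configs_path[$app]} &
app_pid[$app]=$!                     # 3613519 in this run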
00:05:43.495 09:39:20 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:43.495 09:39:20 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:43.495 09:39:20 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:43.495 09:39:20 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3613519 ]] 00:05:43.495 09:39:20 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3613519 00:05:43.495 09:39:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:43.495 09:39:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:43.495 09:39:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3613519 00:05:43.495 09:39:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:44.060 09:39:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:44.060 09:39:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.060 09:39:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3613519 00:05:44.060 09:39:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:44.060 09:39:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:44.060 09:39:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:44.061 09:39:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:44.061 SPDK target shutdown done 00:05:44.061 09:39:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:44.061 Success 00:05:44.061 00:05:44.061 real 0m1.665s 00:05:44.061 user 0m1.507s 00:05:44.061 sys 0m0.621s 00:05:44.061 09:39:20 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.061 09:39:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:44.061 ************************************ 00:05:44.061 END TEST json_config_extra_key 00:05:44.061 ************************************ 00:05:44.061 09:39:20 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.061 09:39:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.061 09:39:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.061 09:39:20 -- common/autotest_common.sh@10 -- # set +x 00:05:44.061 ************************************ 00:05:44.061 START TEST alias_rpc 00:05:44.061 ************************************ 00:05:44.061 09:39:20 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.061 * Looking for test storage... 
00:05:44.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:44.061 09:39:20 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.061 09:39:20 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.061 09:39:20 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.319 09:39:21 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.319 09:39:21 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:44.319 09:39:21 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.319 09:39:21 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.319 --rc genhtml_branch_coverage=1 00:05:44.319 --rc genhtml_function_coverage=1 00:05:44.319 --rc genhtml_legend=1 00:05:44.319 --rc geninfo_all_blocks=1 00:05:44.319 --rc geninfo_unexecuted_blocks=1 00:05:44.319 00:05:44.319 ' 00:05:44.319 09:39:21 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.319 --rc genhtml_branch_coverage=1 00:05:44.319 --rc genhtml_function_coverage=1 00:05:44.319 --rc genhtml_legend=1 00:05:44.319 --rc geninfo_all_blocks=1 00:05:44.319 --rc geninfo_unexecuted_blocks=1 00:05:44.319 00:05:44.319 ' 00:05:44.319 09:39:21 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.319 --rc genhtml_branch_coverage=1 00:05:44.319 --rc genhtml_function_coverage=1 00:05:44.319 --rc genhtml_legend=1 00:05:44.319 --rc geninfo_all_blocks=1 00:05:44.319 --rc geninfo_unexecuted_blocks=1 00:05:44.319 00:05:44.319 ' 00:05:44.319 09:39:21 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.319 --rc genhtml_branch_coverage=1 00:05:44.319 --rc genhtml_function_coverage=1 00:05:44.319 --rc genhtml_legend=1 00:05:44.319 --rc geninfo_all_blocks=1 00:05:44.319 --rc geninfo_unexecuted_blocks=1 00:05:44.319 00:05:44.319 ' 00:05:44.319 09:39:21 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:44.319 09:39:21 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3613837 00:05:44.319 09:39:21 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3613837 00:05:44.319 09:39:21 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.319 09:39:21 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3613837 ']' 00:05:44.319 09:39:21 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.319 09:39:21 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.319 09:39:21 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.319 09:39:21 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.319 09:39:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.319 [2024-11-20 09:39:21.087749] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:05:44.319 [2024-11-20 09:39:21.087827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3613837 ] 00:05:44.319 [2024-11-20 09:39:21.152067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.319 [2024-11-20 09:39:21.209770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.577 09:39:21 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.577 09:39:21 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:44.577 09:39:21 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:45.143 09:39:21 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3613837 00:05:45.143 09:39:21 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3613837 ']' 00:05:45.143 09:39:21 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3613837 00:05:45.143 09:39:21 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:45.143 09:39:21 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.143 09:39:21 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3613837 00:05:45.143 09:39:21 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.143 09:39:21 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.143 09:39:21 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3613837' 00:05:45.143 killing process with pid 3613837 00:05:45.143 09:39:21 alias_rpc -- common/autotest_common.sh@973 -- # kill 3613837 00:05:45.143 09:39:21 alias_rpc -- common/autotest_common.sh@978 -- # wait 3613837 00:05:45.401 00:05:45.401 real 0m1.324s 00:05:45.401 user 0m1.448s 00:05:45.401 sys 0m0.428s 00:05:45.401 09:39:22 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.401 09:39:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.401 ************************************ 00:05:45.401 END TEST alias_rpc 00:05:45.401 ************************************ 00:05:45.401 09:39:22 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:45.401 09:39:22 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:45.401 09:39:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.401 09:39:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.401 09:39:22 -- common/autotest_common.sh@10 -- # set +x 00:05:45.401 ************************************ 00:05:45.401 START TEST spdkcli_tcp 00:05:45.401 ************************************ 00:05:45.401 09:39:22 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:45.401 * Looking for test storage... 
00:05:45.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:45.659 09:39:22 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.659 09:39:22 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.659 09:39:22 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.659 09:39:22 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.659 09:39:22 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:45.659 09:39:22 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.659 09:39:22 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.659 --rc genhtml_branch_coverage=1 00:05:45.659 --rc genhtml_function_coverage=1 00:05:45.659 --rc genhtml_legend=1 00:05:45.659 --rc geninfo_all_blocks=1 00:05:45.659 --rc geninfo_unexecuted_blocks=1 00:05:45.659 00:05:45.659 ' 00:05:45.659 09:39:22 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.659 --rc genhtml_branch_coverage=1 00:05:45.659 --rc genhtml_function_coverage=1 00:05:45.659 --rc genhtml_legend=1 00:05:45.659 --rc geninfo_all_blocks=1 00:05:45.659 --rc 
geninfo_unexecuted_blocks=1 00:05:45.659 00:05:45.659 ' 00:05:45.659 09:39:22 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.659 --rc genhtml_branch_coverage=1 00:05:45.659 --rc genhtml_function_coverage=1 00:05:45.659 --rc genhtml_legend=1 00:05:45.659 --rc geninfo_all_blocks=1 00:05:45.659 --rc geninfo_unexecuted_blocks=1 00:05:45.659 00:05:45.659 ' 00:05:45.660 09:39:22 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.660 --rc genhtml_branch_coverage=1 00:05:45.660 --rc genhtml_function_coverage=1 00:05:45.660 --rc genhtml_legend=1 00:05:45.660 --rc geninfo_all_blocks=1 00:05:45.660 --rc geninfo_unexecuted_blocks=1 00:05:45.660 00:05:45.660 ' 00:05:45.660 09:39:22 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:45.660 09:39:22 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:45.660 09:39:22 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:45.660 09:39:22 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:45.660 09:39:22 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:45.660 09:39:22 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:45.660 09:39:22 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:45.660 09:39:22 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:45.660 09:39:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.660 09:39:22 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3614030 00:05:45.660 09:39:22 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:45.660 09:39:22 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3614030 00:05:45.660 09:39:22 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3614030 ']' 00:05:45.660 09:39:22 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.660 09:39:22 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.660 09:39:22 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.660 09:39:22 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.660 09:39:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.660 [2024-11-20 09:39:22.466054] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:05:45.660 [2024-11-20 09:39:22.466154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3614030 ] 00:05:45.660 [2024-11-20 09:39:22.532377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.918 [2024-11-20 09:39:22.591120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.918 [2024-11-20 09:39:22.591124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.176 09:39:22 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.176 09:39:22 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:46.176 09:39:22 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3614039 00:05:46.176 09:39:22 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:46.176 09:39:22 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:46.434 [ 00:05:46.434 "bdev_malloc_delete", 00:05:46.434 "bdev_malloc_create", 00:05:46.434 "bdev_null_resize", 00:05:46.434 "bdev_null_delete", 00:05:46.434 "bdev_null_create", 00:05:46.434 "bdev_nvme_cuse_unregister", 00:05:46.434 "bdev_nvme_cuse_register", 00:05:46.434 "bdev_opal_new_user", 00:05:46.434 "bdev_opal_set_lock_state", 00:05:46.434 "bdev_opal_delete", 00:05:46.434 "bdev_opal_get_info", 00:05:46.434 "bdev_opal_create", 00:05:46.434 "bdev_nvme_opal_revert", 00:05:46.434 "bdev_nvme_opal_init", 00:05:46.434 "bdev_nvme_send_cmd", 00:05:46.434 "bdev_nvme_set_keys", 00:05:46.434 "bdev_nvme_get_path_iostat", 00:05:46.434 "bdev_nvme_get_mdns_discovery_info", 00:05:46.434 "bdev_nvme_stop_mdns_discovery", 00:05:46.434 "bdev_nvme_start_mdns_discovery", 00:05:46.434 "bdev_nvme_set_multipath_policy", 00:05:46.434 "bdev_nvme_set_preferred_path", 00:05:46.434 "bdev_nvme_get_io_paths", 00:05:46.434 "bdev_nvme_remove_error_injection", 00:05:46.434 "bdev_nvme_add_error_injection", 00:05:46.434 "bdev_nvme_get_discovery_info", 00:05:46.434 "bdev_nvme_stop_discovery", 00:05:46.434 "bdev_nvme_start_discovery", 00:05:46.434 "bdev_nvme_get_controller_health_info", 00:05:46.434 "bdev_nvme_disable_controller", 00:05:46.434 "bdev_nvme_enable_controller", 00:05:46.434 "bdev_nvme_reset_controller", 00:05:46.434 "bdev_nvme_get_transport_statistics", 00:05:46.434 "bdev_nvme_apply_firmware", 00:05:46.434 "bdev_nvme_detach_controller", 00:05:46.434 "bdev_nvme_get_controllers", 00:05:46.434 "bdev_nvme_attach_controller", 00:05:46.434 "bdev_nvme_set_hotplug", 00:05:46.434 "bdev_nvme_set_options", 00:05:46.434 "bdev_passthru_delete", 00:05:46.434 "bdev_passthru_create", 00:05:46.434 "bdev_lvol_set_parent_bdev", 00:05:46.434 "bdev_lvol_set_parent", 00:05:46.434 "bdev_lvol_check_shallow_copy", 00:05:46.434 "bdev_lvol_start_shallow_copy", 00:05:46.434 "bdev_lvol_grow_lvstore", 00:05:46.434 "bdev_lvol_get_lvols", 00:05:46.434 "bdev_lvol_get_lvstores", 00:05:46.434 "bdev_lvol_delete", 00:05:46.434 "bdev_lvol_set_read_only", 00:05:46.434 "bdev_lvol_resize", 00:05:46.434 "bdev_lvol_decouple_parent", 00:05:46.434 "bdev_lvol_inflate", 00:05:46.434 "bdev_lvol_rename", 00:05:46.434 "bdev_lvol_clone_bdev", 00:05:46.434 "bdev_lvol_clone", 00:05:46.434 "bdev_lvol_snapshot", 00:05:46.434 "bdev_lvol_create", 00:05:46.434 "bdev_lvol_delete_lvstore", 00:05:46.434 "bdev_lvol_rename_lvstore", 
00:05:46.434 "bdev_lvol_create_lvstore", 00:05:46.434 "bdev_raid_set_options", 00:05:46.434 "bdev_raid_remove_base_bdev", 00:05:46.434 "bdev_raid_add_base_bdev", 00:05:46.434 "bdev_raid_delete", 00:05:46.434 "bdev_raid_create", 00:05:46.434 "bdev_raid_get_bdevs", 00:05:46.434 "bdev_error_inject_error", 00:05:46.434 "bdev_error_delete", 00:05:46.434 "bdev_error_create", 00:05:46.434 "bdev_split_delete", 00:05:46.434 "bdev_split_create", 00:05:46.434 "bdev_delay_delete", 00:05:46.434 "bdev_delay_create", 00:05:46.434 "bdev_delay_update_latency", 00:05:46.434 "bdev_zone_block_delete", 00:05:46.434 "bdev_zone_block_create", 00:05:46.434 "blobfs_create", 00:05:46.434 "blobfs_detect", 00:05:46.434 "blobfs_set_cache_size", 00:05:46.434 "bdev_aio_delete", 00:05:46.434 "bdev_aio_rescan", 00:05:46.434 "bdev_aio_create", 00:05:46.434 "bdev_ftl_set_property", 00:05:46.434 "bdev_ftl_get_properties", 00:05:46.434 "bdev_ftl_get_stats", 00:05:46.434 "bdev_ftl_unmap", 00:05:46.434 "bdev_ftl_unload", 00:05:46.434 "bdev_ftl_delete", 00:05:46.434 "bdev_ftl_load", 00:05:46.434 "bdev_ftl_create", 00:05:46.434 "bdev_virtio_attach_controller", 00:05:46.434 "bdev_virtio_scsi_get_devices", 00:05:46.434 "bdev_virtio_detach_controller", 00:05:46.434 "bdev_virtio_blk_set_hotplug", 00:05:46.434 "bdev_iscsi_delete", 00:05:46.434 "bdev_iscsi_create", 00:05:46.434 "bdev_iscsi_set_options", 00:05:46.434 "accel_error_inject_error", 00:05:46.434 "ioat_scan_accel_module", 00:05:46.434 "dsa_scan_accel_module", 00:05:46.434 "iaa_scan_accel_module", 00:05:46.434 "vfu_virtio_create_fs_endpoint", 00:05:46.434 "vfu_virtio_create_scsi_endpoint", 00:05:46.434 "vfu_virtio_scsi_remove_target", 00:05:46.434 "vfu_virtio_scsi_add_target", 00:05:46.434 "vfu_virtio_create_blk_endpoint", 00:05:46.434 "vfu_virtio_delete_endpoint", 00:05:46.434 "keyring_file_remove_key", 00:05:46.434 "keyring_file_add_key", 00:05:46.434 "keyring_linux_set_options", 00:05:46.434 "fsdev_aio_delete", 00:05:46.434 "fsdev_aio_create", 00:05:46.434 "iscsi_get_histogram", 00:05:46.434 "iscsi_enable_histogram", 00:05:46.434 "iscsi_set_options", 00:05:46.434 "iscsi_get_auth_groups", 00:05:46.434 "iscsi_auth_group_remove_secret", 00:05:46.434 "iscsi_auth_group_add_secret", 00:05:46.434 "iscsi_delete_auth_group", 00:05:46.434 "iscsi_create_auth_group", 00:05:46.434 "iscsi_set_discovery_auth", 00:05:46.434 "iscsi_get_options", 00:05:46.434 "iscsi_target_node_request_logout", 00:05:46.434 "iscsi_target_node_set_redirect", 00:05:46.434 "iscsi_target_node_set_auth", 00:05:46.434 "iscsi_target_node_add_lun", 00:05:46.434 "iscsi_get_stats", 00:05:46.434 "iscsi_get_connections", 00:05:46.434 "iscsi_portal_group_set_auth", 00:05:46.434 "iscsi_start_portal_group", 00:05:46.434 "iscsi_delete_portal_group", 00:05:46.434 "iscsi_create_portal_group", 00:05:46.434 "iscsi_get_portal_groups", 00:05:46.434 "iscsi_delete_target_node", 00:05:46.434 "iscsi_target_node_remove_pg_ig_maps", 00:05:46.434 "iscsi_target_node_add_pg_ig_maps", 00:05:46.434 "iscsi_create_target_node", 00:05:46.434 "iscsi_get_target_nodes", 00:05:46.434 "iscsi_delete_initiator_group", 00:05:46.434 "iscsi_initiator_group_remove_initiators", 00:05:46.434 "iscsi_initiator_group_add_initiators", 00:05:46.434 "iscsi_create_initiator_group", 00:05:46.434 "iscsi_get_initiator_groups", 00:05:46.434 "nvmf_set_crdt", 00:05:46.434 "nvmf_set_config", 00:05:46.434 "nvmf_set_max_subsystems", 00:05:46.434 "nvmf_stop_mdns_prr", 00:05:46.434 "nvmf_publish_mdns_prr", 00:05:46.434 "nvmf_subsystem_get_listeners", 00:05:46.434 
"nvmf_subsystem_get_qpairs", 00:05:46.434 "nvmf_subsystem_get_controllers", 00:05:46.434 "nvmf_get_stats", 00:05:46.434 "nvmf_get_transports", 00:05:46.434 "nvmf_create_transport", 00:05:46.434 "nvmf_get_targets", 00:05:46.434 "nvmf_delete_target", 00:05:46.434 "nvmf_create_target", 00:05:46.434 "nvmf_subsystem_allow_any_host", 00:05:46.434 "nvmf_subsystem_set_keys", 00:05:46.434 "nvmf_subsystem_remove_host", 00:05:46.434 "nvmf_subsystem_add_host", 00:05:46.434 "nvmf_ns_remove_host", 00:05:46.434 "nvmf_ns_add_host", 00:05:46.434 "nvmf_subsystem_remove_ns", 00:05:46.434 "nvmf_subsystem_set_ns_ana_group", 00:05:46.435 "nvmf_subsystem_add_ns", 00:05:46.435 "nvmf_subsystem_listener_set_ana_state", 00:05:46.435 "nvmf_discovery_get_referrals", 00:05:46.435 "nvmf_discovery_remove_referral", 00:05:46.435 "nvmf_discovery_add_referral", 00:05:46.435 "nvmf_subsystem_remove_listener", 00:05:46.435 "nvmf_subsystem_add_listener", 00:05:46.435 "nvmf_delete_subsystem", 00:05:46.435 "nvmf_create_subsystem", 00:05:46.435 "nvmf_get_subsystems", 00:05:46.435 "env_dpdk_get_mem_stats", 00:05:46.435 "nbd_get_disks", 00:05:46.435 "nbd_stop_disk", 00:05:46.435 "nbd_start_disk", 00:05:46.435 "ublk_recover_disk", 00:05:46.435 "ublk_get_disks", 00:05:46.435 "ublk_stop_disk", 00:05:46.435 "ublk_start_disk", 00:05:46.435 "ublk_destroy_target", 00:05:46.435 "ublk_create_target", 00:05:46.435 "virtio_blk_create_transport", 00:05:46.435 "virtio_blk_get_transports", 00:05:46.435 "vhost_controller_set_coalescing", 00:05:46.435 "vhost_get_controllers", 00:05:46.435 "vhost_delete_controller", 00:05:46.435 "vhost_create_blk_controller", 00:05:46.435 "vhost_scsi_controller_remove_target", 00:05:46.435 "vhost_scsi_controller_add_target", 00:05:46.435 "vhost_start_scsi_controller", 00:05:46.435 "vhost_create_scsi_controller", 00:05:46.435 "thread_set_cpumask", 00:05:46.435 "scheduler_set_options", 00:05:46.435 "framework_get_governor", 00:05:46.435 "framework_get_scheduler", 00:05:46.435 "framework_set_scheduler", 00:05:46.435 "framework_get_reactors", 00:05:46.435 "thread_get_io_channels", 00:05:46.435 "thread_get_pollers", 00:05:46.435 "thread_get_stats", 00:05:46.435 "framework_monitor_context_switch", 00:05:46.435 "spdk_kill_instance", 00:05:46.435 "log_enable_timestamps", 00:05:46.435 "log_get_flags", 00:05:46.435 "log_clear_flag", 00:05:46.435 "log_set_flag", 00:05:46.435 "log_get_level", 00:05:46.435 "log_set_level", 00:05:46.435 "log_get_print_level", 00:05:46.435 "log_set_print_level", 00:05:46.435 "framework_enable_cpumask_locks", 00:05:46.435 "framework_disable_cpumask_locks", 00:05:46.435 "framework_wait_init", 00:05:46.435 "framework_start_init", 00:05:46.435 "scsi_get_devices", 00:05:46.435 "bdev_get_histogram", 00:05:46.435 "bdev_enable_histogram", 00:05:46.435 "bdev_set_qos_limit", 00:05:46.435 "bdev_set_qd_sampling_period", 00:05:46.435 "bdev_get_bdevs", 00:05:46.435 "bdev_reset_iostat", 00:05:46.435 "bdev_get_iostat", 00:05:46.435 "bdev_examine", 00:05:46.435 "bdev_wait_for_examine", 00:05:46.435 "bdev_set_options", 00:05:46.435 "accel_get_stats", 00:05:46.435 "accel_set_options", 00:05:46.435 "accel_set_driver", 00:05:46.435 "accel_crypto_key_destroy", 00:05:46.435 "accel_crypto_keys_get", 00:05:46.435 "accel_crypto_key_create", 00:05:46.435 "accel_assign_opc", 00:05:46.435 "accel_get_module_info", 00:05:46.435 "accel_get_opc_assignments", 00:05:46.435 "vmd_rescan", 00:05:46.435 "vmd_remove_device", 00:05:46.435 "vmd_enable", 00:05:46.435 "sock_get_default_impl", 00:05:46.435 "sock_set_default_impl", 
00:05:46.435 "sock_impl_set_options", 00:05:46.435 "sock_impl_get_options", 00:05:46.435 "iobuf_get_stats", 00:05:46.435 "iobuf_set_options", 00:05:46.435 "keyring_get_keys", 00:05:46.435 "vfu_tgt_set_base_path", 00:05:46.435 "framework_get_pci_devices", 00:05:46.435 "framework_get_config", 00:05:46.435 "framework_get_subsystems", 00:05:46.435 "fsdev_set_opts", 00:05:46.435 "fsdev_get_opts", 00:05:46.435 "trace_get_info", 00:05:46.435 "trace_get_tpoint_group_mask", 00:05:46.435 "trace_disable_tpoint_group", 00:05:46.435 "trace_enable_tpoint_group", 00:05:46.435 "trace_clear_tpoint_mask", 00:05:46.435 "trace_set_tpoint_mask", 00:05:46.435 "notify_get_notifications", 00:05:46.435 "notify_get_types", 00:05:46.435 "spdk_get_version", 00:05:46.435 "rpc_get_methods" 00:05:46.435 ] 00:05:46.435 09:39:23 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:46.435 09:39:23 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:46.435 09:39:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.435 09:39:23 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:46.435 09:39:23 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3614030 00:05:46.435 09:39:23 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3614030 ']' 00:05:46.435 09:39:23 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3614030 00:05:46.435 09:39:23 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:46.435 09:39:23 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.435 09:39:23 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3614030 00:05:46.435 09:39:23 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.435 09:39:23 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.435 09:39:23 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3614030' 00:05:46.435 killing process with pid 3614030 00:05:46.435 09:39:23 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3614030 00:05:46.435 09:39:23 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3614030 00:05:47.002 00:05:47.002 real 0m1.366s 00:05:47.002 user 0m2.442s 00:05:47.002 sys 0m0.463s 00:05:47.002 09:39:23 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.002 09:39:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.002 ************************************ 00:05:47.002 END TEST spdkcli_tcp 00:05:47.002 ************************************ 00:05:47.002 09:39:23 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.002 09:39:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.002 09:39:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.002 09:39:23 -- common/autotest_common.sh@10 -- # set +x 00:05:47.002 ************************************ 00:05:47.002 START TEST dpdk_mem_utility 00:05:47.002 ************************************ 00:05:47.002 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.002 * Looking for test storage... 
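The long quoted list that closes the spdkcli_tcp test above is the JSON array returned by the rpc_get_methods RPC, i.e. every method the running spdk_tgt will accept. As a rough sketch only (paths copied from this workspace layout, and assuming a target is already listening on the default /var/tmp/spdk.sock the way the test scripts arrange via waitforlisten), the same dump can be reproduced by hand:

  # start a target, list its RPC methods, then ask it to shut itself down
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt &
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_kill_instance SIGTERM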
00:05:47.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:47.002 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.002 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.002 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.002 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.002 09:39:23 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:47.002 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.002 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.002 --rc genhtml_branch_coverage=1 00:05:47.002 --rc genhtml_function_coverage=1 00:05:47.002 --rc genhtml_legend=1 00:05:47.002 --rc geninfo_all_blocks=1 00:05:47.002 --rc geninfo_unexecuted_blocks=1 00:05:47.002 00:05:47.002 ' 00:05:47.002 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.002 --rc 
genhtml_branch_coverage=1 00:05:47.002 --rc genhtml_function_coverage=1 00:05:47.002 --rc genhtml_legend=1 00:05:47.002 --rc geninfo_all_blocks=1 00:05:47.002 --rc geninfo_unexecuted_blocks=1 00:05:47.002 00:05:47.002 ' 00:05:47.002 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:47.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.002 --rc genhtml_branch_coverage=1 00:05:47.002 --rc genhtml_function_coverage=1 00:05:47.002 --rc genhtml_legend=1 00:05:47.002 --rc geninfo_all_blocks=1 00:05:47.002 --rc geninfo_unexecuted_blocks=1 00:05:47.002 00:05:47.002 ' 00:05:47.002 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.002 --rc genhtml_branch_coverage=1 00:05:47.002 --rc genhtml_function_coverage=1 00:05:47.002 --rc genhtml_legend=1 00:05:47.002 --rc geninfo_all_blocks=1 00:05:47.002 --rc geninfo_unexecuted_blocks=1 00:05:47.002 00:05:47.002 ' 00:05:47.002 09:39:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:47.002 09:39:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3614244 00:05:47.003 09:39:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.003 09:39:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3614244 00:05:47.003 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3614244 ']' 00:05:47.003 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.003 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.003 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.003 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.003 09:39:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:47.003 [2024-11-20 09:39:23.883811] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:05:47.003 [2024-11-20 09:39:23.883890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3614244 ] 00:05:47.261 [2024-11-20 09:39:23.950398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.261 [2024-11-20 09:39:24.009544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.524 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.524 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:47.524 09:39:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:47.524 09:39:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:47.524 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.524 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:47.524 { 00:05:47.524 "filename": "/tmp/spdk_mem_dump.txt" 00:05:47.524 } 00:05:47.524 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.524 09:39:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:47.524 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:47.524 1 heaps totaling size 810.000000 MiB 00:05:47.524 size: 810.000000 MiB heap id: 0 00:05:47.524 end heaps---------- 00:05:47.524 9 mempools totaling size 595.772034 MiB 00:05:47.524 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:47.524 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:47.524 size: 92.545471 MiB name: bdev_io_3614244 00:05:47.524 size: 50.003479 MiB name: msgpool_3614244 00:05:47.524 size: 36.509338 MiB name: fsdev_io_3614244 00:05:47.524 size: 21.763794 MiB name: PDU_Pool 00:05:47.524 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:47.524 size: 4.133484 MiB name: evtpool_3614244 00:05:47.524 size: 0.026123 MiB name: Session_Pool 00:05:47.524 end mempools------- 00:05:47.524 6 memzones totaling size 4.142822 MiB 00:05:47.524 size: 1.000366 MiB name: RG_ring_0_3614244 00:05:47.524 size: 1.000366 MiB name: RG_ring_1_3614244 00:05:47.524 size: 1.000366 MiB name: RG_ring_4_3614244 00:05:47.524 size: 1.000366 MiB name: RG_ring_5_3614244 00:05:47.524 size: 0.125366 MiB name: RG_ring_2_3614244 00:05:47.524 size: 0.015991 MiB name: RG_ring_3_3614244 00:05:47.524 end memzones------- 00:05:47.524 09:39:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:47.524 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:47.524 list of free elements. 
size: 10.862488 MiB 00:05:47.524 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:47.524 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:47.524 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:47.524 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:47.524 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:47.524 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:47.524 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:47.524 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:47.524 element at address: 0x20001a600000 with size: 0.582886 MiB 00:05:47.524 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:47.524 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:47.524 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:47.524 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:47.524 element at address: 0x200027a00000 with size: 0.410034 MiB 00:05:47.524 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:47.524 list of standard malloc elements. size: 199.218628 MiB 00:05:47.524 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:47.524 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:47.524 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:47.524 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:47.524 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:47.524 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:47.524 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:47.524 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:47.524 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:47.524 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:47.524 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:47.524 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:47.524 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:47.524 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:47.524 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:47.524 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:47.524 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:47.524 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:47.525 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:47.525 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:47.525 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:47.525 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:47.525 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:47.525 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:47.525 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:47.525 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:47.525 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:47.525 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:47.525 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:47.525 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:47.525 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:47.525 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:47.525 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:47.525 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:47.525 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:47.525 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:47.525 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:47.525 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:47.525 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:47.525 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:05:47.525 element at address: 0x200027a69040 with size: 0.000183 MiB 00:05:47.525 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:05:47.525 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:47.525 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:47.525 list of memzone associated elements. size: 599.918884 MiB 00:05:47.525 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:47.525 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:47.525 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:47.525 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:47.525 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:47.525 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3614244_0 00:05:47.525 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:47.525 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3614244_0 00:05:47.525 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:47.525 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3614244_0 00:05:47.525 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:47.525 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:47.525 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:47.525 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:47.525 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:47.525 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3614244_0 00:05:47.525 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:47.525 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3614244 00:05:47.525 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:47.525 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3614244 00:05:47.525 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:47.525 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:47.525 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:47.525 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:47.525 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:47.525 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:47.525 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:47.525 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:47.525 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:47.525 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3614244 00:05:47.525 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:47.525 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3614244 00:05:47.525 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:47.525 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3614244 00:05:47.525 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:05:47.525 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3614244 00:05:47.525 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:47.525 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3614244 00:05:47.525 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:47.525 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3614244 00:05:47.525 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:47.525 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:47.525 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:47.525 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:47.525 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:47.525 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:47.525 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:47.525 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3614244 00:05:47.525 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:47.525 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3614244 00:05:47.525 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:47.525 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:47.525 element at address: 0x200027a69100 with size: 0.023743 MiB 00:05:47.525 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:47.525 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:47.525 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3614244 00:05:47.525 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:05:47.525 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:47.525 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:47.525 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3614244 00:05:47.525 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:47.525 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3614244 00:05:47.525 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:47.525 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3614244 00:05:47.525 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:05:47.525 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:47.525 09:39:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:47.525 09:39:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3614244 00:05:47.525 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3614244 ']' 00:05:47.525 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3614244 00:05:47.525 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:47.525 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.525 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3614244 00:05:47.525 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.525 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.525 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3614244' 00:05:47.525 killing process with pid 3614244 00:05:47.525 09:39:24 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3614244 00:05:47.525 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3614244 00:05:48.092 00:05:48.092 real 0m1.157s 00:05:48.092 user 0m1.138s 00:05:48.092 sys 0m0.426s 00:05:48.092 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.092 09:39:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.092 ************************************ 00:05:48.092 END TEST dpdk_mem_utility 00:05:48.092 ************************************ 00:05:48.092 09:39:24 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:48.092 09:39:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.092 09:39:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.092 09:39:24 -- common/autotest_common.sh@10 -- # set +x 00:05:48.092 ************************************ 00:05:48.092 START TEST event 00:05:48.092 ************************************ 00:05:48.092 09:39:24 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:48.092 * Looking for test storage... 00:05:48.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:48.092 09:39:24 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:48.092 09:39:24 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:48.092 09:39:24 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:48.350 09:39:25 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:48.350 09:39:25 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.350 09:39:25 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.350 09:39:25 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.350 09:39:25 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.350 09:39:25 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.350 09:39:25 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.350 09:39:25 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.350 09:39:25 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.350 09:39:25 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.350 09:39:25 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.350 09:39:25 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.350 09:39:25 event -- scripts/common.sh@344 -- # case "$op" in 00:05:48.350 09:39:25 event -- scripts/common.sh@345 -- # : 1 00:05:48.350 09:39:25 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.350 09:39:25 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.350 09:39:25 event -- scripts/common.sh@365 -- # decimal 1 00:05:48.350 09:39:25 event -- scripts/common.sh@353 -- # local d=1 00:05:48.350 09:39:25 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.350 09:39:25 event -- scripts/common.sh@355 -- # echo 1 00:05:48.350 09:39:25 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.350 09:39:25 event -- scripts/common.sh@366 -- # decimal 2 00:05:48.350 09:39:25 event -- scripts/common.sh@353 -- # local d=2 00:05:48.350 09:39:25 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.350 09:39:25 event -- scripts/common.sh@355 -- # echo 2 00:05:48.350 09:39:25 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.350 09:39:25 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.350 09:39:25 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.350 09:39:25 event -- scripts/common.sh@368 -- # return 0 00:05:48.350 09:39:25 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.350 09:39:25 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:48.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.350 --rc genhtml_branch_coverage=1 00:05:48.350 --rc genhtml_function_coverage=1 00:05:48.350 --rc genhtml_legend=1 00:05:48.350 --rc geninfo_all_blocks=1 00:05:48.350 --rc geninfo_unexecuted_blocks=1 00:05:48.350 00:05:48.350 ' 00:05:48.350 09:39:25 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:48.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.351 --rc genhtml_branch_coverage=1 00:05:48.351 --rc genhtml_function_coverage=1 00:05:48.351 --rc genhtml_legend=1 00:05:48.351 --rc geninfo_all_blocks=1 00:05:48.351 --rc geninfo_unexecuted_blocks=1 00:05:48.351 00:05:48.351 ' 00:05:48.351 09:39:25 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:48.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.351 --rc genhtml_branch_coverage=1 00:05:48.351 --rc genhtml_function_coverage=1 00:05:48.351 --rc genhtml_legend=1 00:05:48.351 --rc geninfo_all_blocks=1 00:05:48.351 --rc geninfo_unexecuted_blocks=1 00:05:48.351 00:05:48.351 ' 00:05:48.351 09:39:25 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:48.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.351 --rc genhtml_branch_coverage=1 00:05:48.351 --rc genhtml_function_coverage=1 00:05:48.351 --rc genhtml_legend=1 00:05:48.351 --rc geninfo_all_blocks=1 00:05:48.351 --rc geninfo_unexecuted_blocks=1 00:05:48.351 00:05:48.351 ' 00:05:48.351 09:39:25 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:48.351 09:39:25 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:48.351 09:39:25 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.351 09:39:25 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:48.351 09:39:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.351 09:39:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.351 ************************************ 00:05:48.351 START TEST event_perf 00:05:48.351 ************************************ 00:05:48.351 09:39:25 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:48.351 Running I/O for 1 seconds...[2024-11-20 09:39:25.066434] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:05:48.351 [2024-11-20 09:39:25.066508] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3614447 ] 00:05:48.351 [2024-11-20 09:39:25.131005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.351 [2024-11-20 09:39:25.191245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.351 [2024-11-20 09:39:25.191325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.351 [2024-11-20 09:39:25.191375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.351 [2024-11-20 09:39:25.191378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.722 Running I/O for 1 seconds... 00:05:49.722 lcore 0: 229960 00:05:49.722 lcore 1: 229960 00:05:49.722 lcore 2: 229960 00:05:49.722 lcore 3: 229959 00:05:49.722 done. 00:05:49.722 00:05:49.722 real 0m1.201s 00:05:49.722 user 0m4.130s 00:05:49.722 sys 0m0.066s 00:05:49.722 09:39:26 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.722 09:39:26 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:49.722 ************************************ 00:05:49.722 END TEST event_perf 00:05:49.722 ************************************ 00:05:49.722 09:39:26 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:49.722 09:39:26 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:49.722 09:39:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.722 09:39:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.722 ************************************ 00:05:49.722 START TEST event_reactor 00:05:49.722 ************************************ 00:05:49.722 09:39:26 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:49.722 [2024-11-20 09:39:26.322706] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:05:49.722 [2024-11-20 09:39:26.322771] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3614602 ] 00:05:49.722 [2024-11-20 09:39:26.390066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.722 [2024-11-20 09:39:26.444099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.657 test_start 00:05:50.657 oneshot 00:05:50.657 tick 100 00:05:50.657 tick 100 00:05:50.657 tick 250 00:05:50.657 tick 100 00:05:50.657 tick 100 00:05:50.657 tick 250 00:05:50.657 tick 100 00:05:50.657 tick 500 00:05:50.657 tick 100 00:05:50.657 tick 100 00:05:50.657 tick 250 00:05:50.657 tick 100 00:05:50.657 tick 100 00:05:50.657 test_end 00:05:50.657 00:05:50.657 real 0m1.198s 00:05:50.657 user 0m1.124s 00:05:50.657 sys 0m0.071s 00:05:50.657 09:39:27 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.657 09:39:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:50.657 ************************************ 00:05:50.657 END TEST event_reactor 00:05:50.657 ************************************ 00:05:50.657 09:39:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.657 09:39:27 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:50.657 09:39:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.657 09:39:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.657 ************************************ 00:05:50.657 START TEST event_reactor_perf 00:05:50.657 ************************************ 00:05:50.657 09:39:27 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.915 [2024-11-20 09:39:27.571276] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:05:50.915 [2024-11-20 09:39:27.571354] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3614756 ] 00:05:50.915 [2024-11-20 09:39:27.638294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.915 [2024-11-20 09:39:27.695029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.850 test_start 00:05:51.850 test_end 00:05:51.850 Performance: 453144 events per second 00:05:52.108 00:05:52.108 real 0m1.205s 00:05:52.108 user 0m1.134s 00:05:52.108 sys 0m0.066s 00:05:52.108 09:39:28 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.108 09:39:28 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:52.108 ************************************ 00:05:52.108 END TEST event_reactor_perf 00:05:52.108 ************************************ 00:05:52.108 09:39:28 event -- event/event.sh@49 -- # uname -s 00:05:52.108 09:39:28 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:52.108 09:39:28 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:52.108 09:39:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.108 09:39:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.108 09:39:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.108 ************************************ 00:05:52.108 START TEST event_scheduler 00:05:52.108 ************************************ 00:05:52.108 09:39:28 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:52.108 * Looking for test storage... 
00:05:52.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:52.108 09:39:28 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:52.108 09:39:28 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:52.108 09:39:28 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:52.108 09:39:28 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:52.108 09:39:28 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.108 09:39:28 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.108 09:39:28 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.108 09:39:28 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.108 09:39:28 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.108 09:39:28 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.108 09:39:28 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.108 09:39:28 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.108 09:39:28 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.108 09:39:28 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.108 09:39:28 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.108 09:39:28 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:52.108 09:39:28 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:52.108 09:39:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.108 09:39:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.108 09:39:28 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:52.109 09:39:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:52.109 09:39:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.109 09:39:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:52.109 09:39:28 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.109 09:39:28 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:52.109 09:39:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:52.109 09:39:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.109 09:39:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:52.109 09:39:28 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.109 09:39:28 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.109 09:39:28 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.109 09:39:28 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:52.109 09:39:28 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.109 09:39:28 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:52.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.109 --rc genhtml_branch_coverage=1 00:05:52.109 --rc genhtml_function_coverage=1 00:05:52.109 --rc genhtml_legend=1 00:05:52.109 --rc geninfo_all_blocks=1 00:05:52.109 --rc geninfo_unexecuted_blocks=1 00:05:52.109 00:05:52.109 ' 00:05:52.109 09:39:28 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:52.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.109 --rc genhtml_branch_coverage=1 00:05:52.109 --rc genhtml_function_coverage=1 00:05:52.109 --rc genhtml_legend=1 00:05:52.109 --rc geninfo_all_blocks=1 00:05:52.109 --rc geninfo_unexecuted_blocks=1 00:05:52.109 00:05:52.109 ' 00:05:52.109 09:39:28 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:52.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.109 --rc genhtml_branch_coverage=1 00:05:52.109 --rc genhtml_function_coverage=1 00:05:52.109 --rc genhtml_legend=1 00:05:52.109 --rc geninfo_all_blocks=1 00:05:52.109 --rc geninfo_unexecuted_blocks=1 00:05:52.109 00:05:52.109 ' 00:05:52.109 09:39:28 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:52.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.109 --rc genhtml_branch_coverage=1 00:05:52.109 --rc genhtml_function_coverage=1 00:05:52.109 --rc genhtml_legend=1 00:05:52.109 --rc geninfo_all_blocks=1 00:05:52.109 --rc geninfo_unexecuted_blocks=1 00:05:52.109 00:05:52.109 ' 00:05:52.109 09:39:28 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:52.109 09:39:28 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3615061 00:05:52.109 09:39:28 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:52.109 09:39:28 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.109 09:39:28 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
3615061 00:05:52.109 09:39:28 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3615061 ']' 00:05:52.109 09:39:28 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.109 09:39:28 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.109 09:39:28 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.109 09:39:28 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.109 09:39:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.109 [2024-11-20 09:39:28.999827] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:05:52.109 [2024-11-20 09:39:28.999924] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3615061 ] 00:05:52.366 [2024-11-20 09:39:29.075225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.366 [2024-11-20 09:39:29.142363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.366 [2024-11-20 09:39:29.142386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.366 [2024-11-20 09:39:29.142439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.366 [2024-11-20 09:39:29.142443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.366 09:39:29 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.366 09:39:29 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:52.366 09:39:29 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:52.366 09:39:29 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.366 09:39:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.366 [2024-11-20 09:39:29.239387] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:52.366 [2024-11-20 09:39:29.239415] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:52.366 [2024-11-20 09:39:29.239433] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:52.366 [2024-11-20 09:39:29.239445] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:52.366 [2024-11-20 09:39:29.239455] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:52.366 09:39:29 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.366 09:39:29 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:52.366 09:39:29 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.366 09:39:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.623 [2024-11-20 09:39:29.342351] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
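Stripped of the xtrace noise, the startup that produced the scheduler NOTICE lines above is a short RPC sequence: launch the test app parked at --wait-for-rpc, select the dynamic scheduler, then let subsystem init proceed. A minimal sketch under those assumptions, with rpc.py standing in for the full scripts/rpc.py path used elsewhere in this log:

  # same flags the scheduler test app is started with above
  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  rpc.py framework_set_scheduler dynamic   # falls back from the dpdk governor, per the NOTICE lines
  rpc.py framework_start_init              # finish subsystem init under the newly selected scheduler
  rpc.py framework_get_scheduler           # report the active scheduler and its load/busy options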
00:05:52.623 09:39:29 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.623 09:39:29 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:52.623 09:39:29 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.623 09:39:29 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.623 09:39:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.623 ************************************ 00:05:52.623 START TEST scheduler_create_thread 00:05:52.623 ************************************ 00:05:52.623 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:52.623 09:39:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:52.623 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.623 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.623 2 00:05:52.623 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.623 09:39:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:52.623 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.623 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.623 3 00:05:52.623 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.623 09:39:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:52.623 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.623 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.623 4 00:05:52.623 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.624 5 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.624 6 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.624 7 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.624 8 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.624 9 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.624 10 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.624 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.188 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.188 00:05:53.188 real 0m0.593s 00:05:53.188 user 0m0.012s 00:05:53.188 sys 0m0.002s 00:05:53.188 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.188 09:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.188 ************************************ 00:05:53.188 END TEST scheduler_create_thread 00:05:53.188 ************************************ 00:05:53.188 09:39:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:53.188 09:39:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3615061 00:05:53.188 09:39:29 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3615061 ']' 00:05:53.188 09:39:29 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3615061 00:05:53.188 09:39:29 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:53.188 09:39:29 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.188 09:39:29 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3615061 00:05:53.188 09:39:30 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:53.188 09:39:30 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:53.188 09:39:30 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3615061' 00:05:53.188 killing process with pid 3615061 00:05:53.188 09:39:30 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3615061 00:05:53.188 09:39:30 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3615061 00:05:53.752 [2024-11-20 09:39:30.446533] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
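The scheduler_create_thread subtest that just completed drives the test app's RPC plugin rather than the core RPC set. Condensed, and assuming the scheduler_plugin module is importable the way the test harness arranges (rpc.py again abbreviating the full scripts/rpc.py path), the calls are roughly:

  # create pinned threads with a cpumask (-m) and an active percentage (-a)
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
  # scheduler_thread_create returns a thread id; 11 and 12 are the ids seen in the run above
  rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  rpc.py --plugin scheduler_plugin scheduler_thread_delete 12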
00:05:54.010 00:05:54.010 real 0m1.854s 00:05:54.010 user 0m2.496s 00:05:54.010 sys 0m0.361s 00:05:54.010 09:39:30 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.010 09:39:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.010 ************************************ 00:05:54.010 END TEST event_scheduler 00:05:54.010 ************************************ 00:05:54.010 09:39:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:54.010 09:39:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:54.010 09:39:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.010 09:39:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.010 09:39:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.010 ************************************ 00:05:54.010 START TEST app_repeat 00:05:54.010 ************************************ 00:05:54.010 09:39:30 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:54.010 09:39:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.010 09:39:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.010 09:39:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:54.010 09:39:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.010 09:39:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:54.010 09:39:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:54.010 09:39:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:54.010 09:39:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3615258 00:05:54.010 09:39:30 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:54.010 09:39:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.010 09:39:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3615258' 00:05:54.010 Process app_repeat pid: 3615258 00:05:54.010 09:39:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.010 09:39:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:54.010 spdk_app_start Round 0 00:05:54.010 09:39:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3615258 /var/tmp/spdk-nbd.sock 00:05:54.010 09:39:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3615258 ']' 00:05:54.010 09:39:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.010 09:39:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.010 09:39:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.010 09:39:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.010 09:39:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.010 [2024-11-20 09:39:30.751925] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:05:54.010 [2024-11-20 09:39:30.751990] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3615258 ] 00:05:54.011 [2024-11-20 09:39:30.816041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.011 [2024-11-20 09:39:30.871401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.011 [2024-11-20 09:39:30.871406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.296 09:39:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.296 09:39:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:54.296 09:39:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.579 Malloc0 00:05:54.579 09:39:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.841 Malloc1 00:05:54.841 09:39:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.841 09:39:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.841 09:39:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.841 09:39:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.841 09:39:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.841 09:39:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.841 09:39:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.841 09:39:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.841 09:39:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.841 09:39:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.841 09:39:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.841 09:39:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.841 09:39:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.841 09:39:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.841 09:39:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.841 09:39:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.100 /dev/nbd0 00:05:55.100 09:39:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.100 09:39:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.100 09:39:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:55.100 09:39:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:55.100 09:39:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.100 09:39:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.100 09:39:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:55.100 09:39:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:55.100 09:39:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.100 09:39:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.100 09:39:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.100 1+0 records in 00:05:55.100 1+0 records out 00:05:55.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239465 s, 17.1 MB/s 00:05:55.100 09:39:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.100 09:39:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:55.100 09:39:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.100 09:39:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.100 09:39:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:55.100 09:39:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.100 09:39:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.100 09:39:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.359 /dev/nbd1 00:05:55.359 09:39:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.359 09:39:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.359 09:39:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:55.359 09:39:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:55.359 09:39:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.359 09:39:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.359 09:39:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:55.359 09:39:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:55.359 09:39:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.359 09:39:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.359 09:39:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.359 1+0 records in 00:05:55.359 1+0 records out 00:05:55.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000168653 s, 24.3 MB/s 00:05:55.359 09:39:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.359 09:39:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:55.359 09:39:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.359 09:39:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.359 09:39:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:55.359 09:39:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.359 09:39:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.359 
09:39:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.359 09:39:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.359 09:39:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.925 { 00:05:55.925 "nbd_device": "/dev/nbd0", 00:05:55.925 "bdev_name": "Malloc0" 00:05:55.925 }, 00:05:55.925 { 00:05:55.925 "nbd_device": "/dev/nbd1", 00:05:55.925 "bdev_name": "Malloc1" 00:05:55.925 } 00:05:55.925 ]' 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.925 { 00:05:55.925 "nbd_device": "/dev/nbd0", 00:05:55.925 "bdev_name": "Malloc0" 00:05:55.925 }, 00:05:55.925 { 00:05:55.925 "nbd_device": "/dev/nbd1", 00:05:55.925 "bdev_name": "Malloc1" 00:05:55.925 } 00:05:55.925 ]' 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.925 /dev/nbd1' 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.925 /dev/nbd1' 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.925 256+0 records in 00:05:55.925 256+0 records out 00:05:55.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499594 s, 210 MB/s 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.925 256+0 records in 00:05:55.925 256+0 records out 00:05:55.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196559 s, 53.3 MB/s 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.925 256+0 records in 00:05:55.925 256+0 records out 00:05:55.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021782 s, 48.1 MB/s 00:05:55.925 09:39:32 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.925 09:39:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.184 09:39:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.184 09:39:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.184 09:39:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.184 09:39:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.184 09:39:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.184 09:39:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.184 09:39:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.184 09:39:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.184 09:39:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.184 09:39:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.443 09:39:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.443 09:39:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.443 09:39:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.443 09:39:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.443 09:39:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:56.443 09:39:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.443 09:39:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.443 09:39:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.443 09:39:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.443 09:39:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.443 09:39:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.701 09:39:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.701 09:39:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.701 09:39:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.701 09:39:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.701 09:39:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.701 09:39:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.701 09:39:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.701 09:39:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.701 09:39:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.701 09:39:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.701 09:39:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.701 09:39:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.701 09:39:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.959 09:39:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.217 [2024-11-20 09:39:34.053858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.217 [2024-11-20 09:39:34.112131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.217 [2024-11-20 09:39:34.112131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.475 [2024-11-20 09:39:34.173005] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.475 [2024-11-20 09:39:34.173072] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.003 09:39:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.003 09:39:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:00.003 spdk_app_start Round 1 00:06:00.003 09:39:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3615258 /var/tmp/spdk-nbd.sock 00:06:00.003 09:39:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3615258 ']' 00:06:00.003 09:39:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.003 09:39:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.003 09:39:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
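Round 0 above runs the write/verify pass from nbd_common.sh. Condensed to its essentials, with the temporary-file path shortened to nbdrandtest, the data check the trace performs is:

dd if=/dev/urandom of=nbdrandtest bs=4096 count=256                 # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct      # write it through each exported NBD device
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M nbdrandtest "$nbd"                                 # read back and byte-compare against the source file
done
rm nbdrandtest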
00:06:00.003 09:39:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.003 09:39:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.262 09:39:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.262 09:39:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:00.262 09:39:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.519 Malloc0 00:06:00.519 09:39:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.777 Malloc1 00:06:00.777 09:39:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.777 09:39:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.777 09:39:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.777 09:39:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.777 09:39:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.777 09:39:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.777 09:39:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.777 09:39:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.777 09:39:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.777 09:39:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.777 09:39:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.777 09:39:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.777 09:39:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.777 09:39:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.777 09:39:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.777 09:39:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.342 /dev/nbd0 00:06:01.342 09:39:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.342 09:39:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.342 09:39:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:01.342 09:39:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:01.342 09:39:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.342 09:39:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.342 09:39:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:01.342 09:39:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:01.342 09:39:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.342 09:39:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.342 09:39:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:01.342 1+0 records in 00:06:01.342 1+0 records out 00:06:01.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211038 s, 19.4 MB/s 00:06:01.342 09:39:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.342 09:39:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:01.342 09:39:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.342 09:39:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.342 09:39:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:01.342 09:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.342 09:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.343 09:39:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.600 /dev/nbd1 00:06:01.600 09:39:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.600 09:39:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.600 09:39:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:01.600 09:39:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:01.600 09:39:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.600 09:39:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.600 09:39:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:01.600 09:39:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:01.600 09:39:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.600 09:39:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.600 09:39:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.600 1+0 records in 00:06:01.600 1+0 records out 00:06:01.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00586664 s, 698 kB/s 00:06:01.600 09:39:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.600 09:39:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:01.600 09:39:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.600 09:39:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.600 09:39:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:01.600 09:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.601 09:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.601 09:39:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.601 09:39:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.601 09:39:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:01.859 { 00:06:01.859 "nbd_device": "/dev/nbd0", 00:06:01.859 "bdev_name": "Malloc0" 00:06:01.859 }, 00:06:01.859 { 00:06:01.859 "nbd_device": "/dev/nbd1", 00:06:01.859 "bdev_name": "Malloc1" 00:06:01.859 } 00:06:01.859 ]' 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.859 { 00:06:01.859 "nbd_device": "/dev/nbd0", 00:06:01.859 "bdev_name": "Malloc0" 00:06:01.859 }, 00:06:01.859 { 00:06:01.859 "nbd_device": "/dev/nbd1", 00:06:01.859 "bdev_name": "Malloc1" 00:06:01.859 } 00:06:01.859 ]' 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.859 /dev/nbd1' 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.859 /dev/nbd1' 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.859 256+0 records in 00:06:01.859 256+0 records out 00:06:01.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503377 s, 208 MB/s 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.859 256+0 records in 00:06:01.859 256+0 records out 00:06:01.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201565 s, 52.0 MB/s 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.859 256+0 records in 00:06:01.859 256+0 records out 00:06:01.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220817 s, 47.5 MB/s 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.859 09:39:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.118 09:39:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.118 09:39:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.118 09:39:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.118 09:39:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.118 09:39:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.118 09:39:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.118 09:39:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.118 09:39:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.118 09:39:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.118 09:39:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.683 09:39:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.683 09:39:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.683 09:39:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.683 09:39:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.683 09:39:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.683 09:39:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.683 09:39:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.683 09:39:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.683 09:39:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.683 09:39:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.683 09:39:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.683 09:39:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.683 09:39:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.683 09:39:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.940 09:39:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.940 09:39:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.940 09:39:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.940 09:39:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.940 09:39:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.940 09:39:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.940 09:39:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.940 09:39:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.941 09:39:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.941 09:39:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.198 09:39:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:03.456 [2024-11-20 09:39:40.118763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.456 [2024-11-20 09:39:40.178087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.456 [2024-11-20 09:39:40.178087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.456 [2024-11-20 09:39:40.238043] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.457 [2024-11-20 09:39:40.238130] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.733 09:39:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:06.733 09:39:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:06.733 spdk_app_start Round 2 00:06:06.733 09:39:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3615258 /var/tmp/spdk-nbd.sock 00:06:06.733 09:39:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3615258 ']' 00:06:06.733 09:39:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.733 09:39:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.733 09:39:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
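After both nbd_stop_disk calls, nbd_get_count confirms that nothing is left exported. The jq/grep counting trick visible in the trace, pieced together here as a single pipeline (the || true guard is added in this sketch because grep -c exits non-zero on zero matches; the real nbd_common.sh splits the same steps across several helper lines):

count=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ]    # 2 while Malloc0/Malloc1 are exported, 0 once both devices are stopped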
00:06:06.733 09:39:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.733 09:39:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.733 09:39:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.733 09:39:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:06.733 09:39:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.733 Malloc0 00:06:06.733 09:39:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.990 Malloc1 00:06:06.991 09:39:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.991 09:39:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.991 09:39:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.991 09:39:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.991 09:39:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.991 09:39:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.991 09:39:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.991 09:39:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.991 09:39:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.991 09:39:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.991 09:39:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.991 09:39:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.991 09:39:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:06.991 09:39:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.991 09:39:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.991 09:39:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.248 /dev/nbd0 00:06:07.248 09:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.248 09:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.248 09:39:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:07.248 09:39:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:07.248 09:39:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.248 09:39:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.248 09:39:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:07.248 09:39:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:07.248 09:39:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.248 09:39:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.248 09:39:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:07.248 1+0 records in 00:06:07.248 1+0 records out 00:06:07.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208969 s, 19.6 MB/s 00:06:07.248 09:39:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.248 09:39:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:07.248 09:39:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.248 09:39:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.248 09:39:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:07.248 09:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.248 09:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.248 09:39:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.506 /dev/nbd1 00:06:07.506 09:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.506 09:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.506 09:39:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:07.506 09:39:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:07.506 09:39:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.506 09:39:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.506 09:39:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:07.506 09:39:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:07.506 09:39:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.506 09:39:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.506 09:39:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.506 1+0 records in 00:06:07.506 1+0 records out 00:06:07.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295706 s, 13.9 MB/s 00:06:07.506 09:39:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.506 09:39:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:07.506 09:39:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.506 09:39:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.506 09:39:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:07.506 09:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.506 09:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.506 09:39:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.506 09:39:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.507 09:39:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:08.072 { 00:06:08.072 "nbd_device": "/dev/nbd0", 00:06:08.072 "bdev_name": "Malloc0" 00:06:08.072 }, 00:06:08.072 { 00:06:08.072 "nbd_device": "/dev/nbd1", 00:06:08.072 "bdev_name": "Malloc1" 00:06:08.072 } 00:06:08.072 ]' 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.072 { 00:06:08.072 "nbd_device": "/dev/nbd0", 00:06:08.072 "bdev_name": "Malloc0" 00:06:08.072 }, 00:06:08.072 { 00:06:08.072 "nbd_device": "/dev/nbd1", 00:06:08.072 "bdev_name": "Malloc1" 00:06:08.072 } 00:06:08.072 ]' 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:08.072 /dev/nbd1' 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.072 /dev/nbd1' 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:08.072 256+0 records in 00:06:08.072 256+0 records out 00:06:08.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00477691 s, 220 MB/s 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:08.072 256+0 records in 00:06:08.072 256+0 records out 00:06:08.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198896 s, 52.7 MB/s 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.072 256+0 records in 00:06:08.072 256+0 records out 00:06:08.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222993 s, 47.0 MB/s 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.072 09:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.073 09:39:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:08.073 09:39:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.073 09:39:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.073 09:39:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.073 09:39:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.073 09:39:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.073 09:39:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.073 09:39:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:08.073 09:39:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.073 09:39:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.073 09:39:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.073 09:39:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.073 09:39:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.073 09:39:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:08.073 09:39:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.073 09:39:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.330 09:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.330 09:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.330 09:39:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.330 09:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.330 09:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.330 09:39:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.330 09:39:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.330 09:39:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.330 09:39:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.330 09:39:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.588 09:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.588 09:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.588 09:39:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.588 09:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.588 09:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.588 09:39:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.588 09:39:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.588 09:39:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.588 09:39:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.588 09:39:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.589 09:39:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.846 09:39:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.846 09:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.846 09:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.846 09:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.846 09:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.846 09:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.846 09:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:08.846 09:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.846 09:39:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.846 09:39:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.846 09:39:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.846 09:39:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.846 09:39:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.104 09:39:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.362 [2024-11-20 09:39:46.196872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.362 [2024-11-20 09:39:46.254170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.362 [2024-11-20 09:39:46.254173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.620 [2024-11-20 09:39:46.315421] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.620 [2024-11-20 09:39:46.315489] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:12.149 09:39:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3615258 /var/tmp/spdk-nbd.sock 00:06:12.149 09:39:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3615258 ']' 00:06:12.149 09:39:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.149 09:39:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.149 09:39:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
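At this point the trace has cycled the same create/export/verify/kill sequence three times. A simplified reconstruction of the loop the event.sh trace implies; the helper names and arguments are the ones visible in the log, but the body is compressed and should be read as a sketch rather than the actual script:

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096     # Malloc0
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096     # Malloc1
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM     # end this round's app instance
    sleep 3                                                         # let the app shut down before the next round
done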
00:06:12.149 09:39:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.149 09:39:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.406 09:39:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.406 09:39:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:12.406 09:39:49 event.app_repeat -- event/event.sh@39 -- # killprocess 3615258 00:06:12.406 09:39:49 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3615258 ']' 00:06:12.406 09:39:49 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3615258 00:06:12.406 09:39:49 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:12.406 09:39:49 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.406 09:39:49 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3615258 00:06:12.406 09:39:49 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.406 09:39:49 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.406 09:39:49 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3615258' 00:06:12.407 killing process with pid 3615258 00:06:12.407 09:39:49 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3615258 00:06:12.407 09:39:49 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3615258 00:06:12.665 spdk_app_start is called in Round 0. 00:06:12.665 Shutdown signal received, stop current app iteration 00:06:12.665 Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 reinitialization... 00:06:12.665 spdk_app_start is called in Round 1. 00:06:12.665 Shutdown signal received, stop current app iteration 00:06:12.665 Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 reinitialization... 00:06:12.665 spdk_app_start is called in Round 2. 00:06:12.665 Shutdown signal received, stop current app iteration 00:06:12.665 Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 reinitialization... 00:06:12.665 spdk_app_start is called in Round 3. 
00:06:12.665 Shutdown signal received, stop current app iteration 00:06:12.665 09:39:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:12.665 09:39:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:12.665 00:06:12.665 real 0m18.772s 00:06:12.665 user 0m41.382s 00:06:12.665 sys 0m3.274s 00:06:12.665 09:39:49 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.665 09:39:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.665 ************************************ 00:06:12.665 END TEST app_repeat 00:06:12.665 ************************************ 00:06:12.665 09:39:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:12.665 09:39:49 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:12.665 09:39:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.665 09:39:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.665 09:39:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.665 ************************************ 00:06:12.665 START TEST cpu_locks 00:06:12.665 ************************************ 00:06:12.665 09:39:49 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:12.924 * Looking for test storage... 00:06:12.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:12.924 09:39:49 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.924 09:39:49 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.924 09:39:49 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.924 09:39:49 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:12.924 09:39:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:12.925 09:39:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.925 09:39:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:12.925 09:39:49 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.925 09:39:49 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.925 09:39:49 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.925 09:39:49 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:12.925 09:39:49 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.925 09:39:49 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.925 --rc genhtml_branch_coverage=1 00:06:12.925 --rc genhtml_function_coverage=1 00:06:12.925 --rc genhtml_legend=1 00:06:12.925 --rc geninfo_all_blocks=1 00:06:12.925 --rc geninfo_unexecuted_blocks=1 00:06:12.925 00:06:12.925 ' 00:06:12.925 09:39:49 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.925 --rc genhtml_branch_coverage=1 00:06:12.925 --rc genhtml_function_coverage=1 00:06:12.925 --rc genhtml_legend=1 00:06:12.925 --rc geninfo_all_blocks=1 00:06:12.925 --rc geninfo_unexecuted_blocks=1 00:06:12.925 00:06:12.925 ' 00:06:12.925 09:39:49 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.925 --rc genhtml_branch_coverage=1 00:06:12.925 --rc genhtml_function_coverage=1 00:06:12.925 --rc genhtml_legend=1 00:06:12.925 --rc geninfo_all_blocks=1 00:06:12.925 --rc geninfo_unexecuted_blocks=1 00:06:12.925 00:06:12.925 ' 00:06:12.925 09:39:49 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.925 --rc genhtml_branch_coverage=1 00:06:12.925 --rc genhtml_function_coverage=1 00:06:12.925 --rc genhtml_legend=1 00:06:12.925 --rc geninfo_all_blocks=1 00:06:12.925 --rc geninfo_unexecuted_blocks=1 00:06:12.925 00:06:12.925 ' 00:06:12.925 09:39:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:12.925 09:39:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:12.925 09:39:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:12.925 09:39:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:12.925 09:39:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.925 09:39:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.925 09:39:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.925 ************************************ 
00:06:12.925 START TEST default_locks 00:06:12.925 ************************************ 00:06:12.925 09:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:12.925 09:39:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3617769 00:06:12.925 09:39:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.925 09:39:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3617769 00:06:12.925 09:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3617769 ']' 00:06:12.925 09:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.925 09:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.925 09:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.925 09:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.925 09:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.925 [2024-11-20 09:39:49.774156] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:06:12.925 [2024-11-20 09:39:49.774250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3617769 ] 00:06:13.183 [2024-11-20 09:39:49.839477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.183 [2024-11-20 09:39:49.899968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.441 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.441 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:13.441 09:39:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3617769 00:06:13.441 09:39:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3617769 00:06:13.441 09:39:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.698 lslocks: write error 00:06:13.698 09:39:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3617769 00:06:13.698 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3617769 ']' 00:06:13.699 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3617769 00:06:13.699 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:13.699 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.699 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3617769 00:06:13.699 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.699 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.699 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 3617769' 00:06:13.699 killing process with pid 3617769 00:06:13.699 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3617769 00:06:13.699 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3617769 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3617769 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3617769 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3617769 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3617769 ']' 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3617769) - No such process 00:06:14.266 ERROR: process (pid: 3617769) is no longer running 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:14.266 00:06:14.266 real 0m1.189s 00:06:14.266 user 0m1.161s 00:06:14.266 sys 0m0.494s 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.266 09:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.266 ************************************ 00:06:14.266 END TEST default_locks 00:06:14.266 ************************************ 00:06:14.266 09:39:50 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:14.266 09:39:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.266 09:39:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.266 09:39:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.266 ************************************ 00:06:14.266 START TEST default_locks_via_rpc 00:06:14.266 ************************************ 00:06:14.266 09:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:14.266 09:39:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3617933 00:06:14.266 09:39:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.266 09:39:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3617933 00:06:14.266 09:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3617933 ']' 00:06:14.266 09:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.266 09:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.266 09:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:14.266 09:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.266 09:39:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.266 [2024-11-20 09:39:51.011810] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:06:14.266 [2024-11-20 09:39:51.011901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3617933 ] 00:06:14.266 [2024-11-20 09:39:51.077429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.266 [2024-11-20 09:39:51.136111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.525 09:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.525 09:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:14.525 09:39:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:14.525 09:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.525 09:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.525 09:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.525 09:39:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:14.525 09:39:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:14.525 09:39:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:14.525 09:39:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:14.525 09:39:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:14.525 09:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.525 09:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.525 09:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.525 09:39:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3617933 00:06:14.525 09:39:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3617933 00:06:14.525 09:39:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.784 09:39:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3617933 00:06:14.784 09:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3617933 ']' 00:06:14.784 09:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3617933 00:06:14.784 09:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:14.784 09:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.784 09:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3617933 00:06:14.784 09:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.784 
09:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.784 09:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3617933' 00:06:14.784 killing process with pid 3617933 00:06:14.784 09:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3617933 00:06:14.784 09:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3617933 00:06:15.351 00:06:15.351 real 0m1.133s 00:06:15.351 user 0m1.097s 00:06:15.351 sys 0m0.497s 00:06:15.351 09:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.351 09:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.351 ************************************ 00:06:15.351 END TEST default_locks_via_rpc 00:06:15.351 ************************************ 00:06:15.351 09:39:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:15.351 09:39:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.351 09:39:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.351 09:39:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.351 ************************************ 00:06:15.352 START TEST non_locking_app_on_locked_coremask 00:06:15.352 ************************************ 00:06:15.352 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:15.352 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3618093 00:06:15.352 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.352 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3618093 /var/tmp/spdk.sock 00:06:15.352 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3618093 ']' 00:06:15.352 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.352 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.352 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.352 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.352 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.352 [2024-11-20 09:39:52.197712] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:06:15.352 [2024-11-20 09:39:52.197812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3618093 ] 00:06:15.352 [2024-11-20 09:39:52.263130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.610 [2024-11-20 09:39:52.322187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.869 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.869 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:15.869 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3618222 00:06:15.869 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:15.869 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3618222 /var/tmp/spdk2.sock 00:06:15.869 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3618222 ']' 00:06:15.869 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.869 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.869 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.869 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.869 09:39:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.869 [2024-11-20 09:39:52.643372] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:06:15.870 [2024-11-20 09:39:52.643462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3618222 ] 00:06:15.870 [2024-11-20 09:39:52.742741] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:15.870 [2024-11-20 09:39:52.742767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.128 [2024-11-20 09:39:52.856217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.065 09:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.065 09:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:17.065 09:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3618093 00:06:17.065 09:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3618093 00:06:17.065 09:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.065 lslocks: write error 00:06:17.065 09:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3618093 00:06:17.065 09:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3618093 ']' 00:06:17.065 09:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3618093 00:06:17.065 09:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:17.065 09:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.065 09:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3618093 00:06:17.322 09:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.322 09:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.322 09:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3618093' 00:06:17.322 killing process with pid 3618093 00:06:17.322 09:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3618093 00:06:17.322 09:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3618093 00:06:18.256 09:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3618222 00:06:18.256 09:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3618222 ']' 00:06:18.256 09:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3618222 00:06:18.256 09:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.256 09:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.256 09:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3618222 00:06:18.256 09:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.256 09:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.256 09:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3618222' 00:06:18.256 
killing process with pid 3618222 00:06:18.256 09:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3618222 00:06:18.256 09:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3618222 00:06:18.515 00:06:18.515 real 0m3.118s 00:06:18.515 user 0m3.373s 00:06:18.515 sys 0m0.992s 00:06:18.515 09:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.515 09:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.515 ************************************ 00:06:18.515 END TEST non_locking_app_on_locked_coremask 00:06:18.515 ************************************ 00:06:18.515 09:39:55 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:18.515 09:39:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.515 09:39:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.515 09:39:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.515 ************************************ 00:06:18.515 START TEST locking_app_on_unlocked_coremask 00:06:18.515 ************************************ 00:06:18.515 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:18.515 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3618527 00:06:18.515 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:18.515 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3618527 /var/tmp/spdk.sock 00:06:18.515 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3618527 ']' 00:06:18.515 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.515 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.515 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.515 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.515 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.515 [2024-11-20 09:39:55.368082] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:06:18.515 [2024-11-20 09:39:55.368177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3618527 ] 00:06:18.773 [2024-11-20 09:39:55.435505] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:18.773 [2024-11-20 09:39:55.435543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.773 [2024-11-20 09:39:55.495067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.032 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.032 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:19.032 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3618568 00:06:19.032 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3618568 /var/tmp/spdk2.sock 00:06:19.032 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:19.032 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3618568 ']' 00:06:19.032 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.032 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.032 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.032 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.032 09:39:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.032 [2024-11-20 09:39:55.812637] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:06:19.032 [2024-11-20 09:39:55.812749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3618568 ] 00:06:19.032 [2024-11-20 09:39:55.920399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.290 [2024-11-20 09:39:56.041889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.223 09:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.223 09:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:20.223 09:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3618568 00:06:20.223 09:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3618568 00:06:20.223 09:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.481 lslocks: write error 00:06:20.481 09:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3618527 00:06:20.481 09:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3618527 ']' 00:06:20.481 09:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3618527 00:06:20.481 09:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:20.481 09:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.481 09:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3618527 00:06:20.481 09:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.481 09:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.481 09:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3618527' 00:06:20.481 killing process with pid 3618527 00:06:20.481 09:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3618527 00:06:20.481 09:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3618527 00:06:21.415 09:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3618568 00:06:21.415 09:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3618568 ']' 00:06:21.415 09:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3618568 00:06:21.415 09:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:21.415 09:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.415 09:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3618568 00:06:21.415 09:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.415 09:39:58 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.415 09:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3618568' 00:06:21.415 killing process with pid 3618568 00:06:21.415 09:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3618568 00:06:21.415 09:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3618568 00:06:21.672 00:06:21.672 real 0m3.204s 00:06:21.672 user 0m3.433s 00:06:21.672 sys 0m1.033s 00:06:21.672 09:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.672 09:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.672 ************************************ 00:06:21.672 END TEST locking_app_on_unlocked_coremask 00:06:21.672 ************************************ 00:06:21.672 09:39:58 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:21.672 09:39:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.672 09:39:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.672 09:39:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.672 ************************************ 00:06:21.672 START TEST locking_app_on_locked_coremask 00:06:21.672 ************************************ 00:06:21.672 09:39:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:21.672 09:39:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3618964 00:06:21.672 09:39:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3618964 /var/tmp/spdk.sock 00:06:21.672 09:39:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.672 09:39:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3618964 ']' 00:06:21.672 09:39:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.672 09:39:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.672 09:39:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.672 09:39:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.672 09:39:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.929 [2024-11-20 09:39:58.622786] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:06:21.929 [2024-11-20 09:39:58.622879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3618964 ] 00:06:21.929 [2024-11-20 09:39:58.688905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.929 [2024-11-20 09:39:58.748809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.186 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.186 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:22.186 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3618975 00:06:22.186 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:22.186 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3618975 /var/tmp/spdk2.sock 00:06:22.186 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:22.186 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3618975 /var/tmp/spdk2.sock 00:06:22.186 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:22.186 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.186 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:22.186 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.186 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3618975 /var/tmp/spdk2.sock 00:06:22.186 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3618975 ']' 00:06:22.186 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.186 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.187 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.187 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.187 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.187 [2024-11-20 09:39:59.058886] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:06:22.187 [2024-11-20 09:39:59.058970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3618975 ] 00:06:22.444 [2024-11-20 09:39:59.158816] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3618964 has claimed it. 00:06:22.444 [2024-11-20 09:39:59.158871] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:23.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3618975) - No such process 00:06:23.008 ERROR: process (pid: 3618975) is no longer running 00:06:23.008 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.008 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:23.008 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:23.008 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:23.008 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:23.008 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:23.008 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3618964 00:06:23.008 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3618964 00:06:23.008 09:39:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.266 lslocks: write error 00:06:23.266 09:40:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3618964 00:06:23.266 09:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3618964 ']' 00:06:23.266 09:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3618964 00:06:23.266 09:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:23.266 09:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.266 09:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3618964 00:06:23.266 09:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.266 09:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.266 09:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3618964' 00:06:23.266 killing process with pid 3618964 00:06:23.266 09:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3618964 00:06:23.266 09:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3618964 00:06:23.830 00:06:23.830 real 0m1.928s 00:06:23.830 user 0m2.161s 00:06:23.830 sys 0m0.605s 00:06:23.830 09:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:06:23.830 09:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.830 ************************************ 00:06:23.830 END TEST locking_app_on_locked_coremask 00:06:23.830 ************************************ 00:06:23.830 09:40:00 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:23.830 09:40:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.830 09:40:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.830 09:40:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.830 ************************************ 00:06:23.830 START TEST locking_overlapped_coremask 00:06:23.830 ************************************ 00:06:23.830 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:23.830 09:40:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3619260 00:06:23.830 09:40:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:23.830 09:40:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3619260 /var/tmp/spdk.sock 00:06:23.830 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3619260 ']' 00:06:23.830 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.830 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.830 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.830 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.830 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.830 [2024-11-20 09:40:00.602495] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:06:23.830 [2024-11-20 09:40:00.602595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3619260 ] 00:06:23.830 [2024-11-20 09:40:00.668583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.830 [2024-11-20 09:40:00.724802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.830 [2024-11-20 09:40:00.724865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.830 [2024-11-20 09:40:00.724868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.089 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.089 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:24.089 09:40:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3619271 00:06:24.089 09:40:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3619271 /var/tmp/spdk2.sock 00:06:24.089 09:40:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:24.089 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:24.089 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3619271 /var/tmp/spdk2.sock 00:06:24.089 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:24.089 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.089 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:24.089 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.089 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3619271 /var/tmp/spdk2.sock 00:06:24.089 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3619271 ']' 00:06:24.089 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.089 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.089 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.089 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.089 09:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.346 [2024-11-20 09:40:01.048623] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:06:24.346 [2024-11-20 09:40:01.048707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3619271 ] 00:06:24.346 [2024-11-20 09:40:01.153071] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3619260 has claimed it. 00:06:24.346 [2024-11-20 09:40:01.153134] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:24.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3619271) - No such process 00:06:24.911 ERROR: process (pid: 3619271) is no longer running 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3619260 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3619260 ']' 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3619260 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3619260 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3619260' 00:06:24.911 killing process with pid 3619260 00:06:24.911 09:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3619260 00:06:24.911 09:40:01 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3619260 00:06:25.476 00:06:25.476 real 0m1.673s 00:06:25.476 user 0m4.659s 00:06:25.476 sys 0m0.466s 00:06:25.476 09:40:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.476 09:40:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.476 ************************************ 00:06:25.476 END TEST locking_overlapped_coremask 00:06:25.476 ************************************ 00:06:25.476 09:40:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:25.476 09:40:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.476 09:40:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.476 09:40:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.477 ************************************ 00:06:25.477 START TEST locking_overlapped_coremask_via_rpc 00:06:25.477 ************************************ 00:06:25.477 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:25.477 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3619437 00:06:25.477 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:25.477 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3619437 /var/tmp/spdk.sock 00:06:25.477 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3619437 ']' 00:06:25.477 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.477 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.477 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.477 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.477 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.477 [2024-11-20 09:40:02.326518] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:06:25.477 [2024-11-20 09:40:02.326616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3619437 ] 00:06:25.734 [2024-11-20 09:40:02.393647] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:25.734 [2024-11-20 09:40:02.393677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.734 [2024-11-20 09:40:02.449043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.734 [2024-11-20 09:40:02.449151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.734 [2024-11-20 09:40:02.449154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.993 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.993 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.993 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3619563 00:06:25.993 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3619563 /var/tmp/spdk2.sock 00:06:25.993 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:25.993 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3619563 ']' 00:06:25.993 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.993 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.993 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.993 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.993 09:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.993 [2024-11-20 09:40:02.784156] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:06:25.993 [2024-11-20 09:40:02.784247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3619563 ] 00:06:25.993 [2024-11-20 09:40:02.888161] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:25.993 [2024-11-20 09:40:02.888201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.283 [2024-11-20 09:40:03.016243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.283 [2024-11-20 09:40:03.016312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.283 [2024-11-20 09:40:03.016314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:26.873 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.873 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:26.873 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:26.873 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.873 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.873 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.873 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.873 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:26.873 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.873 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:26.873 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.873 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:26.873 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.873 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.873 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.873 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.131 [2024-11-20 09:40:03.790406] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3619437 has claimed it. 
00:06:27.131 request: 00:06:27.131 { 00:06:27.131 "method": "framework_enable_cpumask_locks", 00:06:27.131 "req_id": 1 00:06:27.131 } 00:06:27.131 Got JSON-RPC error response 00:06:27.131 response: 00:06:27.131 { 00:06:27.131 "code": -32603, 00:06:27.131 "message": "Failed to claim CPU core: 2" 00:06:27.131 } 00:06:27.131 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:27.131 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:27.131 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.131 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.131 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.131 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3619437 /var/tmp/spdk.sock 00:06:27.131 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3619437 ']' 00:06:27.131 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.131 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.131 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.131 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.131 09:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.388 09:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.388 09:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:27.388 09:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3619563 /var/tmp/spdk2.sock 00:06:27.388 09:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3619563 ']' 00:06:27.389 09:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.389 09:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.389 09:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
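For reference, the failure captured just above is the intended behaviour of the CPU core locks: the first spdk_tgt (mask 0x7, pid 3619437) claims cores 0-2 when framework_enable_cpumask_locks is issued, so the second target (mask 0x1c, which shares core 2) gets JSON-RPC error -32603. A minimal manual reproduction, sketched only from the commands already visible in this log (same binaries, sockets and flags, paths shown relative to the spdk checkout; not a definitive recipe):

    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                           # first target, cores 0-2, locks off at startup
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # second target, cores 2-4, own RPC socket
    ./scripts/rpc.py framework_enable_cpumask_locks                                 # first target claims its mask; /var/tmp/spdk_cpu_lock_000..002 appear
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks          # fails with -32603 "Failed to claim CPU core: 2"

After that, check_remaining_locks only has to confirm that /var/tmp/spdk_cpu_lock_000 through _002 are exactly the lock files left behind, which is the comparison that runs below.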
00:06:27.389 09:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.389 09:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.646 09:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.646 09:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:27.646 09:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:27.646 09:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.646 09:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.646 09:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.646 00:06:27.646 real 0m2.078s 00:06:27.646 user 0m1.152s 00:06:27.646 sys 0m0.174s 00:06:27.646 09:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.646 09:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.646 ************************************ 00:06:27.646 END TEST locking_overlapped_coremask_via_rpc 00:06:27.646 ************************************ 00:06:27.646 09:40:04 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:27.646 09:40:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3619437 ]] 00:06:27.646 09:40:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3619437 00:06:27.646 09:40:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3619437 ']' 00:06:27.646 09:40:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3619437 00:06:27.646 09:40:04 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:27.646 09:40:04 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.646 09:40:04 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3619437 00:06:27.646 09:40:04 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.646 09:40:04 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.646 09:40:04 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3619437' 00:06:27.646 killing process with pid 3619437 00:06:27.646 09:40:04 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3619437 00:06:27.646 09:40:04 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3619437 00:06:28.212 09:40:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3619563 ]] 00:06:28.212 09:40:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3619563 00:06:28.212 09:40:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3619563 ']' 00:06:28.212 09:40:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3619563 00:06:28.212 09:40:04 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:28.212 09:40:04 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:28.212 09:40:04 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3619563 00:06:28.212 09:40:04 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:28.212 09:40:04 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:28.212 09:40:04 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3619563' 00:06:28.212 killing process with pid 3619563 00:06:28.212 09:40:04 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3619563 00:06:28.212 09:40:04 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3619563 00:06:28.470 09:40:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:28.470 09:40:05 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:28.470 09:40:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3619437 ]] 00:06:28.470 09:40:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3619437 00:06:28.470 09:40:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3619437 ']' 00:06:28.470 09:40:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3619437 00:06:28.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3619437) - No such process 00:06:28.470 09:40:05 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3619437 is not found' 00:06:28.470 Process with pid 3619437 is not found 00:06:28.470 09:40:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3619563 ]] 00:06:28.470 09:40:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3619563 00:06:28.470 09:40:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3619563 ']' 00:06:28.470 09:40:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3619563 00:06:28.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3619563) - No such process 00:06:28.470 09:40:05 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3619563 is not found' 00:06:28.470 Process with pid 3619563 is not found 00:06:28.470 09:40:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:28.470 00:06:28.470 real 0m15.776s 00:06:28.470 user 0m28.897s 00:06:28.470 sys 0m5.206s 00:06:28.470 09:40:05 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.470 09:40:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.470 ************************************ 00:06:28.470 END TEST cpu_locks 00:06:28.470 ************************************ 00:06:28.470 00:06:28.470 real 0m40.461s 00:06:28.470 user 1m19.384s 00:06:28.470 sys 0m9.306s 00:06:28.470 09:40:05 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.470 09:40:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.470 ************************************ 00:06:28.470 END TEST event 00:06:28.470 ************************************ 00:06:28.470 09:40:05 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:28.470 09:40:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.470 09:40:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.470 09:40:05 -- common/autotest_common.sh@10 -- # set +x 00:06:28.727 ************************************ 00:06:28.727 START TEST thread 00:06:28.727 ************************************ 00:06:28.727 09:40:05 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:28.727 * Looking for test storage... 00:06:28.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:28.728 09:40:05 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:28.728 09:40:05 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:28.728 09:40:05 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:28.728 09:40:05 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:28.728 09:40:05 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.728 09:40:05 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.728 09:40:05 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.728 09:40:05 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.728 09:40:05 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.728 09:40:05 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.728 09:40:05 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.728 09:40:05 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.728 09:40:05 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.728 09:40:05 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.728 09:40:05 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.728 09:40:05 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:28.728 09:40:05 thread -- scripts/common.sh@345 -- # : 1 00:06:28.728 09:40:05 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.728 09:40:05 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.728 09:40:05 thread -- scripts/common.sh@365 -- # decimal 1 00:06:28.728 09:40:05 thread -- scripts/common.sh@353 -- # local d=1 00:06:28.728 09:40:05 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.728 09:40:05 thread -- scripts/common.sh@355 -- # echo 1 00:06:28.728 09:40:05 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.728 09:40:05 thread -- scripts/common.sh@366 -- # decimal 2 00:06:28.728 09:40:05 thread -- scripts/common.sh@353 -- # local d=2 00:06:28.728 09:40:05 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.728 09:40:05 thread -- scripts/common.sh@355 -- # echo 2 00:06:28.728 09:40:05 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.728 09:40:05 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.728 09:40:05 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.728 09:40:05 thread -- scripts/common.sh@368 -- # return 0 00:06:28.728 09:40:05 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.728 09:40:05 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:28.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.728 --rc genhtml_branch_coverage=1 00:06:28.728 --rc genhtml_function_coverage=1 00:06:28.728 --rc genhtml_legend=1 00:06:28.728 --rc geninfo_all_blocks=1 00:06:28.728 --rc geninfo_unexecuted_blocks=1 00:06:28.728 00:06:28.728 ' 00:06:28.728 09:40:05 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:28.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.728 --rc genhtml_branch_coverage=1 00:06:28.728 --rc genhtml_function_coverage=1 00:06:28.728 --rc genhtml_legend=1 00:06:28.728 --rc geninfo_all_blocks=1 00:06:28.728 --rc geninfo_unexecuted_blocks=1 00:06:28.728 
00:06:28.728 ' 00:06:28.728 09:40:05 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:28.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.728 --rc genhtml_branch_coverage=1 00:06:28.728 --rc genhtml_function_coverage=1 00:06:28.728 --rc genhtml_legend=1 00:06:28.728 --rc geninfo_all_blocks=1 00:06:28.728 --rc geninfo_unexecuted_blocks=1 00:06:28.728 00:06:28.728 ' 00:06:28.728 09:40:05 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:28.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.728 --rc genhtml_branch_coverage=1 00:06:28.728 --rc genhtml_function_coverage=1 00:06:28.728 --rc genhtml_legend=1 00:06:28.728 --rc geninfo_all_blocks=1 00:06:28.728 --rc geninfo_unexecuted_blocks=1 00:06:28.728 00:06:28.728 ' 00:06:28.728 09:40:05 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:28.728 09:40:05 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:28.728 09:40:05 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.728 09:40:05 thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.728 ************************************ 00:06:28.728 START TEST thread_poller_perf 00:06:28.728 ************************************ 00:06:28.728 09:40:05 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:28.728 [2024-11-20 09:40:05.579462] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:06:28.728 [2024-11-20 09:40:05.579519] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3619954 ] 00:06:28.985 [2024-11-20 09:40:05.645589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.985 [2024-11-20 09:40:05.705837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.985 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:29.918 [2024-11-20T08:40:06.832Z] ====================================== 00:06:29.918 [2024-11-20T08:40:06.832Z] busy:2707718298 (cyc) 00:06:29.918 [2024-11-20T08:40:06.832Z] total_run_count: 365000 00:06:29.918 [2024-11-20T08:40:06.832Z] tsc_hz: 2700000000 (cyc) 00:06:29.918 [2024-11-20T08:40:06.832Z] ====================================== 00:06:29.918 [2024-11-20T08:40:06.832Z] poller_cost: 7418 (cyc), 2747 (nsec) 00:06:29.918 00:06:29.918 real 0m1.206s 00:06:29.918 user 0m1.135s 00:06:29.918 sys 0m0.065s 00:06:29.918 09:40:06 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.918 09:40:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.918 ************************************ 00:06:29.918 END TEST thread_poller_perf 00:06:29.918 ************************************ 00:06:29.918 09:40:06 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:29.918 09:40:06 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:29.918 09:40:06 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.918 09:40:06 thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.918 ************************************ 00:06:29.918 START TEST thread_poller_perf 00:06:29.918 ************************************ 00:06:29.918 09:40:06 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:30.175 [2024-11-20 09:40:06.837219] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:06:30.175 [2024-11-20 09:40:06.837283] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3620108 ] 00:06:30.175 [2024-11-20 09:40:06.900863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.175 [2024-11-20 09:40:06.957133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.175 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:31.546 [2024-11-20T08:40:08.460Z] ====================================== 00:06:31.546 [2024-11-20T08:40:08.460Z] busy:2702608308 (cyc) 00:06:31.546 [2024-11-20T08:40:08.460Z] total_run_count: 4834000 00:06:31.546 [2024-11-20T08:40:08.460Z] tsc_hz: 2700000000 (cyc) 00:06:31.546 [2024-11-20T08:40:08.460Z] ====================================== 00:06:31.546 [2024-11-20T08:40:08.460Z] poller_cost: 559 (cyc), 207 (nsec) 00:06:31.546 00:06:31.546 real 0m1.199s 00:06:31.546 user 0m1.129s 00:06:31.546 sys 0m0.065s 00:06:31.546 09:40:08 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.546 09:40:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:31.546 ************************************ 00:06:31.546 END TEST thread_poller_perf 00:06:31.546 ************************************ 00:06:31.546 09:40:08 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:31.546 00:06:31.546 real 0m2.645s 00:06:31.546 user 0m2.395s 00:06:31.546 sys 0m0.254s 00:06:31.546 09:40:08 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.546 09:40:08 thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.546 ************************************ 00:06:31.546 END TEST thread 00:06:31.546 ************************************ 00:06:31.546 09:40:08 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:31.546 09:40:08 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:31.546 09:40:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.546 09:40:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.546 09:40:08 -- common/autotest_common.sh@10 -- # set +x 00:06:31.546 ************************************ 00:06:31.546 START TEST app_cmdline 00:06:31.546 ************************************ 00:06:31.546 09:40:08 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:31.546 * Looking for test storage... 
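As a quick sanity check of the two thread_poller_perf runs above: poller_cost is busy cycles divided by total_run_count, converted to nanoseconds at the reported tsc_hz of 2.7 GHz, and -l is the poller period in microseconds (-l 1 vs -l 0). Re-doing the arithmetic from the logged numbers (illustrative only):

    awk 'BEGIN { c = 2707718298/365000;  printf "1us-period run: %d cyc, %d nsec\n", c, c/2.7 }'   # ~7418 cyc, ~2747 nsec
    awk 'BEGIN { c = 2702608308/4834000; printf "0us-period run: %d cyc, %d nsec\n", c, c/2.7 }'   # ~559 cyc, ~207 nsec

This matches the reported poller_cost lines; the zero-period run gets through roughly 13x more iterations at a far lower per-poll cost, consistent with timed pollers being more expensive per execution than busy pollers.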
00:06:31.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:31.546 09:40:08 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:31.546 09:40:08 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:31.546 09:40:08 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:31.546 09:40:08 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.546 09:40:08 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:31.546 09:40:08 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.546 09:40:08 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:31.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.546 --rc genhtml_branch_coverage=1 00:06:31.546 --rc genhtml_function_coverage=1 00:06:31.546 --rc genhtml_legend=1 00:06:31.546 --rc geninfo_all_blocks=1 00:06:31.546 --rc geninfo_unexecuted_blocks=1 00:06:31.546 00:06:31.546 ' 00:06:31.547 09:40:08 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:31.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.547 --rc genhtml_branch_coverage=1 00:06:31.547 --rc genhtml_function_coverage=1 00:06:31.547 --rc genhtml_legend=1 00:06:31.547 --rc geninfo_all_blocks=1 00:06:31.547 --rc geninfo_unexecuted_blocks=1 
00:06:31.547 00:06:31.547 ' 00:06:31.547 09:40:08 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:31.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.547 --rc genhtml_branch_coverage=1 00:06:31.547 --rc genhtml_function_coverage=1 00:06:31.547 --rc genhtml_legend=1 00:06:31.547 --rc geninfo_all_blocks=1 00:06:31.547 --rc geninfo_unexecuted_blocks=1 00:06:31.547 00:06:31.547 ' 00:06:31.547 09:40:08 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:31.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.547 --rc genhtml_branch_coverage=1 00:06:31.547 --rc genhtml_function_coverage=1 00:06:31.547 --rc genhtml_legend=1 00:06:31.547 --rc geninfo_all_blocks=1 00:06:31.547 --rc geninfo_unexecuted_blocks=1 00:06:31.547 00:06:31.547 ' 00:06:31.547 09:40:08 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:31.547 09:40:08 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3620421 00:06:31.547 09:40:08 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:31.547 09:40:08 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3620421 00:06:31.547 09:40:08 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3620421 ']' 00:06:31.547 09:40:08 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.547 09:40:08 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.547 09:40:08 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.547 09:40:08 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.547 09:40:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.547 [2024-11-20 09:40:08.292003] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:06:31.547 [2024-11-20 09:40:08.292093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3620421 ] 00:06:31.547 [2024-11-20 09:40:08.356097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.547 [2024-11-20 09:40:08.413501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.804 09:40:08 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.804 09:40:08 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:31.804 09:40:08 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:32.062 { 00:06:32.062 "version": "SPDK v25.01-pre git sha1 f549a9953", 00:06:32.062 "fields": { 00:06:32.062 "major": 25, 00:06:32.062 "minor": 1, 00:06:32.062 "patch": 0, 00:06:32.062 "suffix": "-pre", 00:06:32.062 "commit": "f549a9953" 00:06:32.062 } 00:06:32.062 } 00:06:32.062 09:40:08 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:32.062 09:40:08 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:32.062 09:40:08 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:32.062 09:40:08 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:32.062 09:40:08 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:32.062 09:40:08 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:32.062 09:40:08 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.062 09:40:08 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:32.062 09:40:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:32.062 09:40:08 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.320 09:40:08 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:32.320 09:40:08 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:32.320 09:40:08 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:32.320 09:40:08 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:32.320 09:40:08 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:32.320 09:40:08 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:32.320 09:40:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.320 09:40:08 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:32.320 09:40:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.320 09:40:08 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:32.320 09:40:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.320 09:40:08 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:32.320 09:40:08 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:32.320 09:40:08 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:32.320 request: 00:06:32.320 { 00:06:32.320 "method": "env_dpdk_get_mem_stats", 00:06:32.320 "req_id": 1 00:06:32.320 } 00:06:32.320 Got JSON-RPC error response 00:06:32.320 response: 00:06:32.320 { 00:06:32.320 "code": -32601, 00:06:32.320 "message": "Method not found" 00:06:32.320 } 00:06:32.577 09:40:09 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:32.577 09:40:09 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:32.577 09:40:09 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:32.577 09:40:09 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:32.577 09:40:09 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3620421 00:06:32.577 09:40:09 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3620421 ']' 00:06:32.577 09:40:09 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3620421 00:06:32.577 09:40:09 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:32.577 09:40:09 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.577 09:40:09 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3620421 00:06:32.577 09:40:09 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.577 09:40:09 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.577 09:40:09 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3620421' 00:06:32.577 killing process with pid 3620421 00:06:32.577 09:40:09 app_cmdline -- common/autotest_common.sh@973 -- # kill 3620421 00:06:32.577 09:40:09 app_cmdline -- common/autotest_common.sh@978 -- # wait 3620421 00:06:32.834 00:06:32.834 real 0m1.609s 00:06:32.834 user 0m1.976s 00:06:32.834 sys 0m0.483s 00:06:32.834 09:40:09 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.834 09:40:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:32.834 ************************************ 00:06:32.834 END TEST app_cmdline 00:06:32.834 ************************************ 00:06:32.834 09:40:09 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:32.834 09:40:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.834 09:40:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.834 09:40:09 -- common/autotest_common.sh@10 -- # set +x 00:06:33.091 ************************************ 00:06:33.091 START TEST version 00:06:33.091 ************************************ 00:06:33.091 09:40:09 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:33.091 * Looking for test storage... 
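The app_cmdline sequence above is a compact demonstration of the --rpcs-allowed allowlist: the target was started permitting only spdk_get_version and rpc_get_methods, so those calls succeed while env_dpdk_get_mem_stats is rejected with JSON-RPC -32601 (Method not found). Condensed to the underlying commands (sketch using the same binaries and flags as in the log, paths relative to the spdk checkout):

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version          # allowed: returns the {"version": "SPDK v25.01-pre ..."} document shown above
    ./scripts/rpc.py rpc_get_methods           # allowed: lists exactly the two whitelisted methods
    ./scripts/rpc.py env_dpdk_get_mem_stats    # not on the allowlist: JSON-RPC error -32601 "Method not found"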
00:06:33.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:33.091 09:40:09 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.091 09:40:09 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.091 09:40:09 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.091 09:40:09 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.091 09:40:09 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.091 09:40:09 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.091 09:40:09 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.091 09:40:09 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.091 09:40:09 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.091 09:40:09 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.091 09:40:09 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.091 09:40:09 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.091 09:40:09 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.091 09:40:09 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.091 09:40:09 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.091 09:40:09 version -- scripts/common.sh@344 -- # case "$op" in 00:06:33.091 09:40:09 version -- scripts/common.sh@345 -- # : 1 00:06:33.091 09:40:09 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.091 09:40:09 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.091 09:40:09 version -- scripts/common.sh@365 -- # decimal 1 00:06:33.091 09:40:09 version -- scripts/common.sh@353 -- # local d=1 00:06:33.091 09:40:09 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.091 09:40:09 version -- scripts/common.sh@355 -- # echo 1 00:06:33.091 09:40:09 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.091 09:40:09 version -- scripts/common.sh@366 -- # decimal 2 00:06:33.092 09:40:09 version -- scripts/common.sh@353 -- # local d=2 00:06:33.092 09:40:09 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.092 09:40:09 version -- scripts/common.sh@355 -- # echo 2 00:06:33.092 09:40:09 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.092 09:40:09 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.092 09:40:09 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.092 09:40:09 version -- scripts/common.sh@368 -- # return 0 00:06:33.092 09:40:09 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.092 09:40:09 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.092 --rc genhtml_branch_coverage=1 00:06:33.092 --rc genhtml_function_coverage=1 00:06:33.092 --rc genhtml_legend=1 00:06:33.092 --rc geninfo_all_blocks=1 00:06:33.092 --rc geninfo_unexecuted_blocks=1 00:06:33.092 00:06:33.092 ' 00:06:33.092 09:40:09 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.092 --rc genhtml_branch_coverage=1 00:06:33.092 --rc genhtml_function_coverage=1 00:06:33.092 --rc genhtml_legend=1 00:06:33.092 --rc geninfo_all_blocks=1 00:06:33.092 --rc geninfo_unexecuted_blocks=1 00:06:33.092 00:06:33.092 ' 00:06:33.092 09:40:09 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.092 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.092 --rc genhtml_branch_coverage=1 00:06:33.092 --rc genhtml_function_coverage=1 00:06:33.092 --rc genhtml_legend=1 00:06:33.092 --rc geninfo_all_blocks=1 00:06:33.092 --rc geninfo_unexecuted_blocks=1 00:06:33.092 00:06:33.092 ' 00:06:33.092 09:40:09 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.092 --rc genhtml_branch_coverage=1 00:06:33.092 --rc genhtml_function_coverage=1 00:06:33.092 --rc genhtml_legend=1 00:06:33.092 --rc geninfo_all_blocks=1 00:06:33.092 --rc geninfo_unexecuted_blocks=1 00:06:33.092 00:06:33.092 ' 00:06:33.092 09:40:09 version -- app/version.sh@17 -- # get_header_version major 00:06:33.092 09:40:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:33.092 09:40:09 version -- app/version.sh@14 -- # cut -f2 00:06:33.092 09:40:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:33.092 09:40:09 version -- app/version.sh@17 -- # major=25 00:06:33.092 09:40:09 version -- app/version.sh@18 -- # get_header_version minor 00:06:33.092 09:40:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:33.092 09:40:09 version -- app/version.sh@14 -- # cut -f2 00:06:33.092 09:40:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:33.092 09:40:09 version -- app/version.sh@18 -- # minor=1 00:06:33.092 09:40:09 version -- app/version.sh@19 -- # get_header_version patch 00:06:33.092 09:40:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:33.092 09:40:09 version -- app/version.sh@14 -- # cut -f2 00:06:33.092 09:40:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:33.092 09:40:09 version -- app/version.sh@19 -- # patch=0 00:06:33.092 09:40:09 version -- app/version.sh@20 -- # get_header_version suffix 00:06:33.092 09:40:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:33.092 09:40:09 version -- app/version.sh@14 -- # cut -f2 00:06:33.092 09:40:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:33.092 09:40:09 version -- app/version.sh@20 -- # suffix=-pre 00:06:33.092 09:40:09 version -- app/version.sh@22 -- # version=25.1 00:06:33.092 09:40:09 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:33.092 09:40:09 version -- app/version.sh@28 -- # version=25.1rc0 00:06:33.092 09:40:09 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:33.092 09:40:09 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:33.092 09:40:09 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:33.092 09:40:09 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:33.092 00:06:33.092 real 0m0.196s 00:06:33.092 user 0m0.136s 00:06:33.092 sys 0m0.085s 00:06:33.092 09:40:09 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.092 
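The version.sh run above derives each version component with a small grep | cut | tr pipeline over include/spdk/version.h and then cross-checks the assembled string against the Python package. Its two probes boil down to (sketch, run from the spdk repo root; MINOR, PATCH and SUFFIX use the same pipeline with their own #define names):

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # -> 25
    python3 -c 'import spdk; print(spdk.__version__)'                                                # -> 25.1rc0, compared against the 25.1rc0 assembled from the header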
09:40:09 version -- common/autotest_common.sh@10 -- # set +x 00:06:33.092 ************************************ 00:06:33.092 END TEST version 00:06:33.092 ************************************ 00:06:33.092 09:40:09 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:33.092 09:40:09 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:33.092 09:40:09 -- spdk/autotest.sh@194 -- # uname -s 00:06:33.092 09:40:09 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:33.092 09:40:09 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:33.092 09:40:09 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:33.092 09:40:09 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:33.092 09:40:09 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:33.092 09:40:09 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:33.092 09:40:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:33.092 09:40:09 -- common/autotest_common.sh@10 -- # set +x 00:06:33.092 09:40:09 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:33.092 09:40:09 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:33.092 09:40:09 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:33.092 09:40:09 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:33.092 09:40:09 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:33.092 09:40:09 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:33.092 09:40:09 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:33.092 09:40:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:33.092 09:40:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.092 09:40:09 -- common/autotest_common.sh@10 -- # set +x 00:06:33.350 ************************************ 00:06:33.350 START TEST nvmf_tcp 00:06:33.350 ************************************ 00:06:33.350 09:40:10 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:33.350 * Looking for test storage... 
00:06:33.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:33.350 09:40:10 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.350 09:40:10 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.350 09:40:10 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.350 09:40:10 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.350 09:40:10 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:33.350 09:40:10 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.350 09:40:10 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.350 --rc genhtml_branch_coverage=1 00:06:33.350 --rc genhtml_function_coverage=1 00:06:33.350 --rc genhtml_legend=1 00:06:33.350 --rc geninfo_all_blocks=1 00:06:33.350 --rc geninfo_unexecuted_blocks=1 00:06:33.350 00:06:33.350 ' 00:06:33.350 09:40:10 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.350 --rc genhtml_branch_coverage=1 00:06:33.350 --rc genhtml_function_coverage=1 00:06:33.350 --rc genhtml_legend=1 00:06:33.350 --rc geninfo_all_blocks=1 00:06:33.350 --rc geninfo_unexecuted_blocks=1 00:06:33.350 00:06:33.350 ' 00:06:33.350 09:40:10 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:33.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.350 --rc genhtml_branch_coverage=1 00:06:33.350 --rc genhtml_function_coverage=1 00:06:33.350 --rc genhtml_legend=1 00:06:33.350 --rc geninfo_all_blocks=1 00:06:33.350 --rc geninfo_unexecuted_blocks=1 00:06:33.350 00:06:33.350 ' 00:06:33.350 09:40:10 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.350 --rc genhtml_branch_coverage=1 00:06:33.350 --rc genhtml_function_coverage=1 00:06:33.350 --rc genhtml_legend=1 00:06:33.350 --rc geninfo_all_blocks=1 00:06:33.350 --rc geninfo_unexecuted_blocks=1 00:06:33.350 00:06:33.350 ' 00:06:33.350 09:40:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:33.350 09:40:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:33.350 09:40:10 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:33.350 09:40:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:33.350 09:40:10 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.350 09:40:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.350 ************************************ 00:06:33.350 START TEST nvmf_target_core 00:06:33.350 ************************************ 00:06:33.350 09:40:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:33.350 * Looking for test storage... 00:06:33.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:33.350 09:40:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.350 09:40:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.350 09:40:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.612 09:40:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.612 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.612 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.612 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.612 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.613 --rc genhtml_branch_coverage=1 00:06:33.613 --rc genhtml_function_coverage=1 00:06:33.613 --rc genhtml_legend=1 00:06:33.613 --rc geninfo_all_blocks=1 00:06:33.613 --rc geninfo_unexecuted_blocks=1 00:06:33.613 00:06:33.613 ' 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.613 --rc genhtml_branch_coverage=1 00:06:33.613 --rc genhtml_function_coverage=1 00:06:33.613 --rc genhtml_legend=1 00:06:33.613 --rc geninfo_all_blocks=1 00:06:33.613 --rc geninfo_unexecuted_blocks=1 00:06:33.613 00:06:33.613 ' 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.613 --rc genhtml_branch_coverage=1 00:06:33.613 --rc genhtml_function_coverage=1 00:06:33.613 --rc genhtml_legend=1 00:06:33.613 --rc geninfo_all_blocks=1 00:06:33.613 --rc geninfo_unexecuted_blocks=1 00:06:33.613 00:06:33.613 ' 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.613 --rc genhtml_branch_coverage=1 00:06:33.613 --rc genhtml_function_coverage=1 00:06:33.613 --rc genhtml_legend=1 00:06:33.613 --rc geninfo_all_blocks=1 00:06:33.613 --rc geninfo_unexecuted_blocks=1 00:06:33.613 00:06:33.613 ' 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:33.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:33.613 
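The repeated cmp_versions / lt 1.15 2 calls above are the harness probing the installed lcov version (via lcov --version | awk '{print $NF}') and, because the detected version is below 2, exporting the --rc lcov_branch_coverage / lcov_function_coverage options into LCOV_OPTS and LCOV. A minimal standalone sketch of that dotted-version comparison follows; it assumes purely numeric dot-separated components, whereas the real scripts/common.sh also splits on '-' and ':' (IFS=.-:) and supports more operators than '<'.

  # version_lt A B -> exit 0 (true) when dotted version A is strictly lower than B
  version_lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
      local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components compare as 0
      (( a < b )) && return 0
      (( a > b )) && return 1
    done
    return 1                              # equal is not "less than"
  }

  lcov_ver=$(lcov --version | awk '{print $NF}')
  if version_lt "$lcov_ver" 2; then
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi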
************************************ 00:06:33.613 START TEST nvmf_abort 00:06:33.613 ************************************ 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:33.613 * Looking for test storage... 00:06:33.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.613 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.614 --rc genhtml_branch_coverage=1 00:06:33.614 --rc genhtml_function_coverage=1 00:06:33.614 --rc genhtml_legend=1 00:06:33.614 --rc geninfo_all_blocks=1 00:06:33.614 --rc geninfo_unexecuted_blocks=1 00:06:33.614 00:06:33.614 ' 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.614 --rc genhtml_branch_coverage=1 00:06:33.614 --rc genhtml_function_coverage=1 00:06:33.614 --rc genhtml_legend=1 00:06:33.614 --rc geninfo_all_blocks=1 00:06:33.614 --rc geninfo_unexecuted_blocks=1 00:06:33.614 00:06:33.614 ' 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.614 --rc genhtml_branch_coverage=1 00:06:33.614 --rc genhtml_function_coverage=1 00:06:33.614 --rc genhtml_legend=1 00:06:33.614 --rc geninfo_all_blocks=1 00:06:33.614 --rc geninfo_unexecuted_blocks=1 00:06:33.614 00:06:33.614 ' 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.614 --rc genhtml_branch_coverage=1 00:06:33.614 --rc genhtml_function_coverage=1 00:06:33.614 --rc genhtml_legend=1 00:06:33.614 --rc geninfo_all_blocks=1 00:06:33.614 --rc geninfo_unexecuted_blocks=1 00:06:33.614 00:06:33.614 ' 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:33.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
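The "/var/jenkins/.../common.sh: line 33: [: : integer expression expected" messages above are benign in this run: while common.sh accumulates the nvmf_tgt argument array (the NVMF_APP+=(...) lines), it feeds an empty variable into a numeric '[' ... -eq 1 ']' test, and an empty string is not a valid operand for -eq. A small sketch of both the failure mode and a defensive variant; FLAG and the base command below are placeholders for illustration, not the harness's real variable names.

  FLAG=""                                   # empty, as in this CI run
  [ "$FLAG" -eq 1 ] && echo enabled         # stderr: [: : integer expression expected
  [ "${FLAG:-0}" -eq 1 ] && echo enabled    # defaulting keeps the numeric test well-formed

  # the argument array is built the same additive way the log shows
  NVMF_APP=(./build/bin/nvmf_tgt)           # illustrative base command
  NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)
  echo "${NVMF_APP[@]}"                     # -> ./build/bin/nvmf_tgt -i 0 -e 0xFFFF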
00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:33.614 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.875 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:33.875 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:33.875 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:33.875 09:40:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.404 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:36.404 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:36.404 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:36.404 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:36.404 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:36.404 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:36.405 09:40:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:36.405 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:36.405 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:36.405 09:40:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:36.405 Found net devices under 0000:09:00.0: cvl_0_0 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:36.405 Found net devices under 0000:09:00.1: cvl_0_1 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:36.405 09:40:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:36.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:36.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:06:36.405 00:06:36.405 --- 10.0.0.2 ping statistics --- 00:06:36.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:36.405 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:36.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:36.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:06:36.405 00:06:36.405 --- 10.0.0.1 ping statistics --- 00:06:36.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:36.405 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3622513 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3622513 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3622513 ']' 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.405 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.405 [2024-11-20 09:40:12.922271] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
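The setup above is the core of nvmftestinit on a physical (NET_TYPE=phy) rig: the two ice-driven e810 ports found at 0000:09:00.0 / 0000:09:00.1 come up as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables ACCEPT rule opens TCP port 4420, and a ping in each direction confirms the path before nvmf_tgt is started inside the namespace. A condensed sketch of that wiring, with interface names and addresses taken from the log (the real common.sh adds the iptables comment tag, cleanup traps, and more error handling):

  NS=cvl_0_0_ns_spdk
  TARGET_IF=cvl_0_0;    TARGET_IP=10.0.0.2
  INITIATOR_IF=cvl_0_1; INITIATOR_IP=10.0.0.1

  ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"                   # target port lives in the namespace
  ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"     # initiator side stays in the root ns
  ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 "$TARGET_IP"                                 # root ns -> namespace
  ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"          # namespace -> root ns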
00:06:36.405 [2024-11-20 09:40:12.922370] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.405 [2024-11-20 09:40:12.995267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.405 [2024-11-20 09:40:13.056952] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:36.405 [2024-11-20 09:40:13.057005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:36.405 [2024-11-20 09:40:13.057018] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:36.405 [2024-11-20 09:40:13.057029] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:36.405 [2024-11-20 09:40:13.057038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:36.405 [2024-11-20 09:40:13.058622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.405 [2024-11-20 09:40:13.058677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.405 [2024-11-20 09:40:13.058680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.405 [2024-11-20 09:40:13.212428] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.405 Malloc0 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.405 Delay0 
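With the data path verified, abort.sh builds the target entirely over JSON-RPC: the TCP transport is created, a 64 MiB Malloc bdev with 4096-byte blocks is wrapped in a delay bdev (presumably so I/O stays in flight long enough for the abort example to have something to cancel), and, as the following lines show, Delay0 is exposed through nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2:4420. Below is a sketch of the same sequence issued with scripts/rpc.py against the default /var/tmp/spdk.sock socket; the harness itself goes through its rpc_cmd wrapper rather than calling rpc.py like this, and the four delay values are the latency arguments copied from the log (bdev_delay takes them in microseconds).

  RPC="./scripts/rpc.py"    # assumes nvmf_tgt is already up and listening on /var/tmp/spdk.sock

  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
  $RPC bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB backing bdev, 4096-byte blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000      # roughly 1 s of injected latency per I/O
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420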
00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.405 [2024-11-20 09:40:13.277545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.405 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:36.662 [2024-11-20 09:40:13.424417] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:39.189 Initializing NVMe Controllers 00:06:39.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:39.189 controller IO queue size 128 less than required 00:06:39.189 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:39.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:39.189 Initialization complete. Launching workers. 
00:06:39.189 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29041 00:06:39.189 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29102, failed to submit 62 00:06:39.189 success 29045, unsuccessful 57, failed 0 00:06:39.189 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:39.189 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.189 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.189 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.189 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:39.189 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:39.189 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:39.189 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:39.189 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:39.190 rmmod nvme_tcp 00:06:39.190 rmmod nvme_fabrics 00:06:39.190 rmmod nvme_keyring 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3622513 ']' 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3622513 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3622513 ']' 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3622513 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3622513 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3622513' 00:06:39.190 killing process with pid 3622513 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3622513 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3622513 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:39.190 09:40:15 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:39.190 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.098 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:41.098 00:06:41.098 real 0m7.513s 00:06:41.098 user 0m10.740s 00:06:41.098 sys 0m2.661s 00:06:41.098 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.098 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:41.098 ************************************ 00:06:41.098 END TEST nvmf_abort 00:06:41.098 ************************************ 00:06:41.098 09:40:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:41.098 09:40:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:41.098 09:40:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.098 09:40:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:41.098 ************************************ 00:06:41.098 START TEST nvmf_ns_hotplug_stress 00:06:41.098 ************************************ 00:06:41.098 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:41.098 * Looking for test storage... 
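Each test in this log is launched through the harness's run_test helper, which prints the starred START/END TEST banners, times the child script (the real 0m7.513s / user 0m10.740s / sys 0m2.661s summary above is nvmf_abort's), and passes its exit status back to nvmf_target_core.sh. The wrapper below is only an illustrative stand-in for that pattern, not the actual autotest_common.sh implementation, which also validates its arguments (the '[' 3 -le 1 ']' checks) and toggles xtrace around the child.

  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                      # e.g. test/nvmf/target/abort.sh --transport=tcp
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
  }

  run_test nvmf_ns_hotplug_stress ./test/nvmf/target/ns_hotplug_stress.sh --transport=tcp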
00:06:41.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:41.098 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:41.098 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:41.098 09:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.356 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:41.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.357 --rc genhtml_branch_coverage=1 00:06:41.357 --rc genhtml_function_coverage=1 00:06:41.357 --rc genhtml_legend=1 00:06:41.357 --rc geninfo_all_blocks=1 00:06:41.357 --rc geninfo_unexecuted_blocks=1 00:06:41.357 00:06:41.357 ' 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:41.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.357 --rc genhtml_branch_coverage=1 00:06:41.357 --rc genhtml_function_coverage=1 00:06:41.357 --rc genhtml_legend=1 00:06:41.357 --rc geninfo_all_blocks=1 00:06:41.357 --rc geninfo_unexecuted_blocks=1 00:06:41.357 00:06:41.357 ' 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:41.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.357 --rc genhtml_branch_coverage=1 00:06:41.357 --rc genhtml_function_coverage=1 00:06:41.357 --rc genhtml_legend=1 00:06:41.357 --rc geninfo_all_blocks=1 00:06:41.357 --rc geninfo_unexecuted_blocks=1 00:06:41.357 00:06:41.357 ' 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:41.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.357 --rc genhtml_branch_coverage=1 00:06:41.357 --rc genhtml_function_coverage=1 00:06:41.357 --rc genhtml_legend=1 00:06:41.357 --rc geninfo_all_blocks=1 00:06:41.357 --rc geninfo_unexecuted_blocks=1 00:06:41.357 00:06:41.357 ' 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:41.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:41.357 09:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:43.890 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.890 
09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:43.890 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:43.890 Found net devices under 0000:09:00.0: cvl_0_0 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:43.890 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:43.891 Found net devices under 0000:09:00.1: cvl_0_1 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:43.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:43.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:06:43.891 00:06:43.891 --- 10.0.0.2 ping statistics --- 00:06:43.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.891 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:43.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:43.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:06:43.891 00:06:43.891 --- 10.0.0.1 ping statistics --- 00:06:43.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.891 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3624761 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3624761 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
3624761 ']' 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:43.891 [2024-11-20 09:40:20.540620] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:06:43.891 [2024-11-20 09:40:20.540684] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:43.891 [2024-11-20 09:40:20.612848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.891 [2024-11-20 09:40:20.671129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:43.891 [2024-11-20 09:40:20.671185] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:43.891 [2024-11-20 09:40:20.671209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:43.891 [2024-11-20 09:40:20.671235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:43.891 [2024-11-20 09:40:20.671245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
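The entries above record the test-bed bring-up for NET_TYPE=phy: one port of the E810 pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2, the peer port (cvl_0_1) stays in the root namespace as 10.0.0.1, an iptables rule admits TCP port 4420, both directions are ping-checked, nvme-tcp is loaded, and nvmf_tgt is launched inside the namespace with core mask 0xE. A condensed sketch of that sequence follows; it is assembled from the commands visible in the trace, the interface names and addresses are specific to this run, and the relative nvmf_tgt path and the trailing '&' are editorial simplifications rather than the scripts' exact wording.

# Sketch of the bring-up traced above (nvmf_tcp_init plus nvmfappstart), not the scripts verbatim.
NS=cvl_0_0_ns_spdk                       # target-side namespace used in this run
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"          # target NIC lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                       # root namespace -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1   # namespaced target -> root namespace
modprobe nvme-tcp
ip netns exec "$NS" spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # assumed backgrounding; the trace then waits on /var/tmp/spdk.sock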
00:06:43.891 [2024-11-20 09:40:20.673003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.891 [2024-11-20 09:40:20.673066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.891 [2024-11-20 09:40:20.673074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:43.891 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:44.147 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:44.148 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:44.148 09:40:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:44.404 [2024-11-20 09:40:21.063858] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.404 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:44.660 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:44.917 [2024-11-20 09:40:21.602680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:44.917 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:45.175 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:45.432 Malloc0 00:06:45.432 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:45.689 Delay0 00:06:45.689 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.946 09:40:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:46.203 NULL1 00:06:46.203 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:46.459 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3625178 00:06:46.459 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:46.459 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:06:46.459 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.834 Read completed with error (sct=0, sc=11) 00:06:47.834 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.091 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:48.091 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:48.348 true 00:06:48.348 09:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:06:48.348 09:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.911 09:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.168 09:40:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:49.168 09:40:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:49.425 true 00:06:49.683 09:40:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:06:49.683 09:40:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.944 09:40:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
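With spdk_nvme_perf running against 10.0.0.2:4420 (30 seconds of 512-byte random reads at queue depth 128 on core 0x1), the script settles into the cycle that fills the rest of this log: check that the perf process is still alive, detach namespace 1, re-attach Delay0, then resize NULL1 one step larger (1001, 1002, and so on). The 'Read completed with error (sct=0, sc=11)' lines are the perf job's reads failing while a namespace is momentarily detached, which is what the stress test is exercising. A minimal rendering of the cycle is sketched below; the RPC calls and the null_size progression are taken from the trace, while the while-loop framing and the SPDK_DIR variable are assumptions for readability.

# Sketch of the hotplug cycle repeated below (ns_hotplug_stress.sh lines 44-50 in the trace), not the script verbatim.
rpc_py="$SPDK_DIR/scripts/rpc.py"        # SPDK_DIR stands in for the workspace checkout path
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                      # PERF_PID: the backgrounded spdk_nvme_perf
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    "$rpc_py" bdev_null_resize NULL1 "$null_size"              # prints 'true' on success in the trace
done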
00:06:50.202 09:40:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:50.202 09:40:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:50.460 true 00:06:50.460 09:40:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:06:50.460 09:40:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.718 09:40:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.975 09:40:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:50.975 09:40:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:51.233 true 00:06:51.233 09:40:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:06:51.233 09:40:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.165 09:40:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.421 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:52.421 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:52.678 true 00:06:52.678 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:06:52.678 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.935 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.191 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:53.191 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:53.448 true 00:06:53.448 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:06:53.448 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.013 09:40:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.013 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:54.013 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:54.271 true 00:06:54.271 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:06:54.271 09:40:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.460 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.717 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:55.717 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:55.974 true 00:06:55.974 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:06:55.974 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.230 09:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.488 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:56.488 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:56.745 true 00:06:56.745 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:06:56.745 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.003 09:40:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.261 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:57.261 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:57.518 true 00:06:57.518 09:40:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:06:57.518 09:40:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.497 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.781 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:58.781 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:59.038 true 00:06:59.038 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:06:59.038 09:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.296 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.553 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:59.553 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:59.810 true 00:06:59.810 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:06:59.810 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.067 09:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.324 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:00.324 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:00.581 true 00:07:00.581 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:07:00.581 09:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.513 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.770 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:01.770 09:40:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:02.027 true 00:07:02.027 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:07:02.027 09:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.593 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.850 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:02.850 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:03.108 true 00:07:03.108 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:07:03.109 09:40:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.366 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.624 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:03.624 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:03.881 true 00:07:03.881 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:07:03.881 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.815 09:40:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.073 09:40:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:05.073 09:40:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:05.331 true 00:07:05.331 09:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:07:05.331 09:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.589 09:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.847 09:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:05.847 09:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:06.105 true 00:07:06.105 09:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:07:06.105 09:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.362 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.619 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:06.619 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:06.877 true 00:07:06.877 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:07:06.878 09:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.811 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.069 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:08.069 09:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:08.636 true 00:07:08.636 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:07:08.636 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.636 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.893 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:08.893 09:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:09.151 true 00:07:09.151 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:07:09.151 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.716 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.716 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:09.716 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:09.974 true 00:07:09.974 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:07:09.974 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.346 09:40:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.346 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:11.346 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:11.603 true 00:07:11.603 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:07:11.603 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.862 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.120 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:12.120 09:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:12.377 true 00:07:12.377 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:07:12.377 09:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.314 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.572 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1025 00:07:13.572 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:13.830 true 00:07:13.830 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:07:13.830 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.087 09:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.345 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:14.345 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:14.602 true 00:07:14.602 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:07:14.602 09:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.535 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.793 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:15.793 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:16.051 true 00:07:16.051 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:07:16.051 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.309 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.567 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:16.567 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:16.824 true 00:07:16.824 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:07:16.824 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.757 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.757 Initializing NVMe Controllers 00:07:17.757 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:17.757 Controller IO queue size 128, less than required. 00:07:17.757 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:17.757 Controller IO queue size 128, less than required. 00:07:17.757 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:17.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:17.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:17.757 Initialization complete. Launching workers. 00:07:17.757 ======================================================== 00:07:17.757 Latency(us) 00:07:17.757 Device Information : IOPS MiB/s Average min max 00:07:17.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 597.70 0.29 94772.24 3100.83 1011968.52 00:07:17.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8736.57 4.27 14651.39 3062.73 372361.52 00:07:17.757 ======================================================== 00:07:17.757 Total : 9334.27 4.56 19781.74 3062.73 1011968.52 00:07:17.757 00:07:17.757 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:17.757 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:18.015 true 00:07:18.015 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3625178 00:07:18.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3625178) - No such process 00:07:18.015 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3625178 00:07:18.015 09:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.273 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:18.531 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:18.531 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:18.531 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:18.531 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:18.531 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 
100 4096 00:07:18.788 null0 00:07:19.046 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:19.046 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:19.046 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:19.303 null1 00:07:19.303 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:19.303 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:19.303 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:19.561 null2 00:07:19.561 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:19.561 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:19.561 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:19.819 null3 00:07:19.819 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:19.819 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:19.819 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:20.077 null4 00:07:20.077 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.077 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.077 09:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:20.334 null5 00:07:20.334 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.334 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.334 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:20.592 null6 00:07:20.592 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.592 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.592 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:20.851 null7 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
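The interleaved xtrace above is easier to read once mapped back onto the driver loop in ns_hotplug_stress.sh: @58 sets nthreads=8 and an empty pids array, @59/@60 create eight null bdevs (null0..null7, 100 MB with a 4096-byte block size), and @62-@64 launch one backgrounded add_remove worker per bdev while collecting its PID for the wait at @66 that appears a little further down. A minimal sketch of that driver, reconstructed from the trace rather than copied from the script ($rootdir stands in for the absolute spdk checkout path shown in the trace; quoting and cleanup details may differ):

    nthreads=8
    pids=()

    # @59/@60: back each future namespace with a 100 MB, 4096-byte-block null bdev
    for ((i = 0; i < nthreads; i++)); do
        "$rootdir/scripts/rpc.py" bdev_null_create "null$i" 100 4096
    done

    # @62-@64: one add_remove worker per bdev, NSIDs 1..8, all in the background
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done

    # @66: wait for all eight workers (the "wait 3629382 3629383 ..." line below)
    wait "${pids[@]}"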
00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:20.851 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
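What each backgrounded worker runs is the add_remove function whose body produces the interleaved @14/@16/@17/@18 lines: it captures its namespace ID and bdev at @14, then loops ten times attaching the bdev as that NSID and immediately detaching it. A sketch consistent with the trace (names and RPC argument order taken from the xtrace; the real function may differ in quoting or minor details):

    add_remove() {
        local nsid=$1 bdev=$2

        # @16-@18: ten hot-plug cycles against the same subsystem
        for ((i = 0; i < 10; i++)); do
            "$rootdir/scripts/rpc.py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rootdir/scripts/rpc.py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }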
00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3629382 3629383 3629385 3629387 3629389 3629391 3629393 3629395 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.852 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.110 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.110 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.110 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.110 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.110 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.110 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.110 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.110 09:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.368 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.368 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.368 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.368 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.368 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.368 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.368 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.368 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.368 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.368 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.368 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.368 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.368 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.369 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.369 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.369 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.369 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.369 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.369 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.369 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.369 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.369 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.369 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.369 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.627 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.627 09:40:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.627 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.627 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.627 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.885 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.885 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.885 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:22.143 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.143 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.143 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:22.143 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.143 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.143 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.143 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.143 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.143 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:22.143 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.143 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.143 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:22.143 09:40:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.143 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.143 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:22.143 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.143 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.143 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:22.144 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.144 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.144 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:22.144 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.144 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.144 09:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:22.400 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:22.401 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:22.401 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:22.401 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.401 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.401 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:22.401 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.401 09:40:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:22.658 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
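The same RPCs can be replayed by hand against a running target to reproduce one hot-plug cycle outside the stress loop; the bdev name, size, block size and NQN below are simply the values visible in this log, and the rpc.py path is relative to whatever SPDK checkout is in use:

    # create a dummy 100 MB bdev with 4096-byte blocks
    scripts/rpc.py bdev_null_create null0 100 4096

    # expose it as namespace 1 of the test subsystem, then pull it back out
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

This is the cycle the eight workers run concurrently, which is what makes the namespace attach/detach paths on the target race against each other.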
00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.659 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.917 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:22.917 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:22.917 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.917 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:22.917 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.917 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:22.917 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.917 09:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.176 09:41:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.176 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:23.435 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.435 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.435 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:23.435 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.435 09:41:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.693 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.693 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.693 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.953 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.953 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.953 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:23.953 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.953 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.953 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.953 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.953 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.953 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.953 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.953 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.953 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:23.953 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.953 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.953 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.953 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.953 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.953 09:41:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.953 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.953 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.954 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.954 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.954 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.954 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:24.212 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:24.212 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:24.212 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.212 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.212 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:24.212 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.212 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.212 09:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.470 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:24.729 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:24.729 09:41:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:24.729 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.729 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.729 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:24.729 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.729 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.729 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:24.988 09:41:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.988 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.247 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.247 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.247 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.247 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.247 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.247 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.247 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.247 09:41:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
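Because all eight workers share one xtrace stream, their loop-counter lines interleave and the per-NSID ordering is hard to follow by eye. Given the bounds visible in the trace (8 workers, 10 iterations each), this phase should issue roughly 80 add and 80 remove RPCs; a rough way to sanity-check that from a saved copy of this console output (build.log is only a placeholder name, and earlier steps of the test contribute a few extra matches):

    # per-NSID counts of the add and remove calls seen in the trace
    grep -o 'nvmf_subsystem_add_ns -n [1-8]' build.log | sort | uniq -c
    grep -o 'nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 [1-8]' build.log | sort | uniq -c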
00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.813 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.071 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.071 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.071 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.071 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:26.071 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.071 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:26.071 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.071 09:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.329 09:41:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.329 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.587 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.587 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.587 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.587 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.587 09:41:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.587 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:26.587 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:26.587 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:26.876 09:41:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:26.876 rmmod nvme_tcp 00:07:26.876 rmmod nvme_fabrics 00:07:26.876 rmmod nvme_keyring 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3624761 ']' 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3624761 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3624761 ']' 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3624761 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.876 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3624761 00:07:27.156 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:27.156 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:27.156 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3624761' 00:07:27.156 killing process with pid 3624761 00:07:27.156 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3624761 00:07:27.156 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3624761 00:07:27.156 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:27.156 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:27.156 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:27.156 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:27.156 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:27.156 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:27.156 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:27.157 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:27.157 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:27.157 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.157 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
15> /dev/null' 00:07:27.157 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:29.694 00:07:29.694 real 0m48.132s 00:07:29.694 user 3m43.327s 00:07:29.694 sys 0m16.410s 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:29.694 ************************************ 00:07:29.694 END TEST nvmf_ns_hotplug_stress 00:07:29.694 ************************************ 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:29.694 ************************************ 00:07:29.694 START TEST nvmf_delete_subsystem 00:07:29.694 ************************************ 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:29.694 * Looking for test storage... 00:07:29.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.694 09:41:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.694 --rc genhtml_branch_coverage=1 00:07:29.694 --rc genhtml_function_coverage=1 00:07:29.694 --rc genhtml_legend=1 00:07:29.694 --rc geninfo_all_blocks=1 00:07:29.694 --rc geninfo_unexecuted_blocks=1 00:07:29.694 00:07:29.694 ' 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.694 --rc genhtml_branch_coverage=1 00:07:29.694 --rc genhtml_function_coverage=1 00:07:29.694 --rc genhtml_legend=1 00:07:29.694 --rc geninfo_all_blocks=1 00:07:29.694 --rc geninfo_unexecuted_blocks=1 00:07:29.694 00:07:29.694 ' 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:29.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.694 --rc genhtml_branch_coverage=1 00:07:29.694 --rc genhtml_function_coverage=1 00:07:29.694 --rc genhtml_legend=1 00:07:29.694 --rc geninfo_all_blocks=1 00:07:29.694 --rc geninfo_unexecuted_blocks=1 00:07:29.694 00:07:29.694 ' 00:07:29.694 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.695 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.695 --rc genhtml_branch_coverage=1 00:07:29.695 --rc genhtml_function_coverage=1 00:07:29.695 --rc genhtml_legend=1 00:07:29.695 --rc geninfo_all_blocks=1 00:07:29.695 --rc geninfo_unexecuted_blocks=1 00:07:29.695 00:07:29.695 ' 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:29.695 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:31.597 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.597 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:31.598 
09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:31.598 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:31.598 Found net devices under 0000:09:00.0: cvl_0_0 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:31.598 Found net devices under 0000:09:00.1: cvl_0_1 
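Before the target starts, nvmf_tcp_init wires the two detected e810 ports into a point-to-point NVMe/TCP test network. The iproute2 and iptables calls traced in the next few entries boil down to the commands below; this is a hand-condensed summary of that trace (address flushes omitted), using the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses it prints.

# target side: cvl_0_0 moves into its own network namespace with 10.0.0.2/24
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# initiator side: cvl_0_1 stays in the root namespace as 10.0.0.1/24
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
# open the NVMe/TCP port between them and sanity-check reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1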
00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:31.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:07:31.598 00:07:31.598 --- 10.0.0.2 ping statistics --- 00:07:31.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.598 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:31.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:31.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:07:31.598 00:07:31.598 --- 10.0.0.1 ping statistics --- 00:07:31.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.598 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:31.598 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:31.857 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:31.857 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:31.857 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:31.857 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:31.857 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3632174 00:07:31.857 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:31.857 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3632174 00:07:31.857 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3632174 ']' 00:07:31.857 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.857 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.857 09:41:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.857 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.857 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:31.857 [2024-11-20 09:41:08.580203] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:07:31.857 [2024-11-20 09:41:08.580294] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.857 [2024-11-20 09:41:08.655433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:31.857 [2024-11-20 09:41:08.712046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.857 [2024-11-20 09:41:08.712105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.857 [2024-11-20 09:41:08.712119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.857 [2024-11-20 09:41:08.712130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.857 [2024-11-20 09:41:08.712140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.857 [2024-11-20 09:41:08.716324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.857 [2024-11-20 09:41:08.716335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.115 [2024-11-20 09:41:08.868762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:32.115 09:41:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.115 [2024-11-20 09:41:08.884961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.115 NULL1 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.115 Delay0 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3632313 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:32.115 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:32.115 [2024-11-20 09:41:08.969812] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
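The delete_subsystem setup just traced is deliberately racy: a delay bdev injecting roughly one second of latency per I/O (the delay values are in microseconds) sits in front of a null bdev, so the spdk_nvme_perf run launched above still has commands outstanding when the subsystem is torn down. Condensed from the traced RPCs, which the test issues through its rpc_cmd wrapper around rpc.py, the sequence is:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512-byte blocks
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# spdk_nvme_perf (-q 128 -w randrw -M 70 -o 512 -t 5) connects to 10.0.0.2:4420, then:
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# the in-flight commands complete with errors, which produces the flood of
# "completed with error (sct=0, sc=8)" lines that follows in the trace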
00:07:34.016 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:34.016 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.016 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:34.274 - 00:07:34.276 [repeated spdk_nvme_perf completion lines: 'Read completed with error (sct=0, sc=8)', 'Write completed with error (sct=0, sc=8)', 'starting I/O failed: -6']
00:07:34.275 [2024-11-20 09:41:11.051997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd364a0 is same with the state(6) to be set
00:07:34.276 [2024-11-20 09:41:11.053075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f672c000c40 is same with the state(6) to be set
00:07:35.210 [2024-11-20 09:41:12.024210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd379a0 is same with the state(6) to be set
00:07:35.210 - 00:07:35.211 [further repeated 'Read completed with error (sct=0, sc=8)' and 'Write completed with error (sct=0, sc=8)' lines]
00:07:35.211 [2024-11-20 09:41:12.051489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f672c00d800 is same with the state(6) to be set
00:07:35.211 [2024-11-20 09:41:12.054447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd36680 is same with the state(6) to be set
00:07:35.211 [2024-11-20 09:41:12.054731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f672c00d020 is same with the state(6) to be set
00:07:35.211 [2024-11-20 09:41:12.054909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd362c0 is same with the state(6) to be set
00:07:35.211 Initializing NVMe Controllers
00:07:35.211 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:35.211 Controller IO queue size 128, less than required.
00:07:35.211 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:35.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:35.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:35.211 Initialization complete. Launching workers.
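This is the core of the delete_subsystem check: spdk_nvme_perf still has I/O queued against nqn.2016-06.io.spdk:cnode1 when the subsystem is deleted, so every outstanding request completes back to the initiator with an error (the sct=0, sc=8 completions above). A minimal sketch of that pattern follows; it assumes rpc_cmd in the trace wraps scripts/rpc.py against the default /var/tmp/spdk.sock socket, and it reuses the flags from the second spdk_nvme_perf invocation in this test rather than the exact first one.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Drive I/O against the subsystem in the background.
  "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  # Delete the subsystem while that I/O is still in flight; outstanding
  # requests are expected to fail (the sct=0, sc=8 completions above).
  "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # perf exits non-zero ("errors occurred"), so don't let that abort the test.
  wait "$perf_pid" || true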
00:07:35.211 ========================================================
00:07:35.211 Latency(us)
00:07:35.211 Device Information : IOPS MiB/s Average min max
00:07:35.211 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.76 0.08 904564.05 620.06 1011859.52
00:07:35.211 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 182.15 0.09 915014.34 686.96 1011872.31
00:07:35.211 ========================================================
00:07:35.211 Total : 348.91 0.17 910019.61 620.06 1011872.31
00:07:35.211
00:07:35.211 [2024-11-20 09:41:12.055901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd379a0 (9): Bad file descriptor
00:07:35.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:35.211 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:35.211 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:35.211 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3632313
00:07:35.211 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3632313
00:07:35.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3632313) - No such process
00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3632313
00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3632313
00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3632313
00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:35.776 09:41:12
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.776 [2024-11-20 09:41:12.576051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3632725 00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3632725 00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:35.776 09:41:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:35.776 [2024-11-20 09:41:12.643020] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
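The trace above then rebuilds the subsystem it just deleted (delete_subsystem.sh lines 48 through 52) and relaunches perf against it. A rough equivalent of that sequence, assuming rpc_cmd wraps scripts/rpc.py on /var/tmp/spdk.sock and that the Delay0 bdev already exists in the target:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"   # assumed stand-in for rpc_cmd in the trace

  # Re-create the subsystem: allow any host (-a), serial SPDK00000000000001, up to 10 namespaces.
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

  # Listen on TCP 10.0.0.2:4420 and attach the existing Delay0 bdev as namespace 1.
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # Re-run the workload: 3 s of 70/30 random read/write, 512-byte I/O, queue depth 128, cores 2-3.
  "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!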
00:07:36.340 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:36.340 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3632725 00:07:36.340 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:36.906 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:36.906 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3632725 00:07:36.906 09:41:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:37.470 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:37.470 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3632725 00:07:37.470 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:37.728 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:37.728 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3632725 00:07:37.728 09:41:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:38.292 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:38.292 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3632725 00:07:38.292 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:38.857 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:38.857 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3632725 00:07:38.857 09:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.114 Initializing NVMe Controllers 00:07:39.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:39.114 Controller IO queue size 128, less than required. 00:07:39.114 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:39.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:39.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:39.114 Initialization complete. Launching workers. 
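The repeated (( delay++ > 20 )) / kill -0 3632725 / sleep 0.5 entries above are the harness polling for the second perf process to finish before it tears the target down. In sketch form (paraphrasing the loop in delete_subsystem.sh, not copying it):

  # Poll the background perf process, giving up after roughly 20 half-second intervals.
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      if (( delay++ > 20 )); then
          echo "spdk_nvme_perf did not exit in time" >&2
          break
      fi
      sleep 0.5
  done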
00:07:39.114 ========================================================
00:07:39.114 Latency(us)
00:07:39.114 Device Information : IOPS MiB/s Average min max
00:07:39.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003321.92 1000161.58 1011667.94
00:07:39.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005982.13 1000266.71 1043723.77
00:07:39.114 ========================================================
00:07:39.114 Total : 256.00 0.12 1004652.02 1000161.58 1043723.77
00:07:39.114
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3632725
00:07:39.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3632725) - No such process
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3632725
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:39.372 rmmod nvme_tcp
00:07:39.372 rmmod nvme_fabrics
00:07:39.372 rmmod nvme_keyring
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3632174 ']'
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3632174
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3632174 ']'
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3632174
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3632174
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '['
reactor_0 = sudo ']' 00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3632174' 00:07:39.372 killing process with pid 3632174 00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3632174 00:07:39.372 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3632174 00:07:39.630 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:39.630 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:39.630 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:39.630 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:39.630 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:39.630 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:39.630 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:39.630 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:39.631 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:39.631 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.631 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.631 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.169 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:42.170 00:07:42.170 real 0m12.362s 00:07:42.170 user 0m27.871s 00:07:42.170 sys 0m2.951s 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.170 ************************************ 00:07:42.170 END TEST nvmf_delete_subsystem 00:07:42.170 ************************************ 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.170 ************************************ 00:07:42.170 START TEST nvmf_host_management 00:07:42.170 ************************************ 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:42.170 * Looking for test storage... 
00:07:42.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:42.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.170 --rc genhtml_branch_coverage=1 00:07:42.170 --rc genhtml_function_coverage=1 00:07:42.170 --rc genhtml_legend=1 00:07:42.170 --rc geninfo_all_blocks=1 00:07:42.170 --rc geninfo_unexecuted_blocks=1 00:07:42.170 00:07:42.170 ' 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:42.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.170 --rc genhtml_branch_coverage=1 00:07:42.170 --rc genhtml_function_coverage=1 00:07:42.170 --rc genhtml_legend=1 00:07:42.170 --rc geninfo_all_blocks=1 00:07:42.170 --rc geninfo_unexecuted_blocks=1 00:07:42.170 00:07:42.170 ' 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:42.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.170 --rc genhtml_branch_coverage=1 00:07:42.170 --rc genhtml_function_coverage=1 00:07:42.170 --rc genhtml_legend=1 00:07:42.170 --rc geninfo_all_blocks=1 00:07:42.170 --rc geninfo_unexecuted_blocks=1 00:07:42.170 00:07:42.170 ' 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:42.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.170 --rc genhtml_branch_coverage=1 00:07:42.170 --rc genhtml_function_coverage=1 00:07:42.170 --rc genhtml_legend=1 00:07:42.170 --rc geninfo_all_blocks=1 00:07:42.170 --rc geninfo_unexecuted_blocks=1 00:07:42.170 00:07:42.170 ' 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.170 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:42.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:42.171 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.076 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:44.077 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:44.077 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:44.077 Found net devices under 0000:09:00.0: cvl_0_0 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.077 09:41:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:44.077 Found net devices under 0000:09:00.1: cvl_0_1 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:44.077 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:44.336 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:44.336 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:44.336 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:44.336 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:44.336 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:44.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:44.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:07:44.337 00:07:44.337 --- 10.0.0.2 ping statistics --- 00:07:44.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.337 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:44.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:44.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:07:44.337 00:07:44.337 --- 10.0.0.1 ping statistics --- 00:07:44.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.337 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3635197 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3635197 00:07:44.337 09:41:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3635197 ']' 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.337 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.337 [2024-11-20 09:41:21.160060] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:07:44.337 [2024-11-20 09:41:21.160155] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.337 [2024-11-20 09:41:21.232794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.595 [2024-11-20 09:41:21.291525] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.595 [2024-11-20 09:41:21.291597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.595 [2024-11-20 09:41:21.291621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.595 [2024-11-20 09:41:21.291632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.595 [2024-11-20 09:41:21.291657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
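The nvmf_tgt launch and the waitforlisten step above are the part that is easiest to get wrong when reproducing this by hand: the target runs inside the namespace created earlier, its -m 0x1E core mask selects cores 1-4 (which is why four reactor threads report in just below), and nothing else may proceed until the RPC UNIX socket answers. The loop below is only a simplified stand-in for the real waitforlisten helper in autotest_common.sh, using the paths from this run.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# block until the application has brought up /var/tmp/spdk.sock and answers RPCs
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done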
00:07:44.595 [2024-11-20 09:41:21.293142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.595 [2024-11-20 09:41:21.293247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.595 [2024-11-20 09:41:21.293337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:44.595 [2024-11-20 09:41:21.293342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.595 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.595 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:44.595 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:44.595 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:44.595 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.595 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.595 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:44.595 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.595 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.595 [2024-11-20 09:41:21.441380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.595 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.595 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:44.595 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:44.595 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.595 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:44.595 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:44.595 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:44.595 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.595 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.595 Malloc0 00:07:44.854 [2024-11-20 09:41:21.523277] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3635249 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3635249 /var/tmp/bdevperf.sock 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3635249 ']' 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:44.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:44.854 { 00:07:44.854 "params": { 00:07:44.854 "name": "Nvme$subsystem", 00:07:44.854 "trtype": "$TEST_TRANSPORT", 00:07:44.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:44.854 "adrfam": "ipv4", 00:07:44.854 "trsvcid": "$NVMF_PORT", 00:07:44.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:44.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:44.854 "hdgst": ${hdgst:-false}, 00:07:44.854 "ddgst": ${ddgst:-false} 00:07:44.854 }, 00:07:44.854 "method": "bdev_nvme_attach_controller" 00:07:44.854 } 00:07:44.854 EOF 00:07:44.854 )") 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:44.854 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:44.854 "params": { 00:07:44.854 "name": "Nvme0", 00:07:44.854 "trtype": "tcp", 00:07:44.854 "traddr": "10.0.0.2", 00:07:44.854 "adrfam": "ipv4", 00:07:44.855 "trsvcid": "4420", 00:07:44.855 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:44.855 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:44.855 "hdgst": false, 00:07:44.855 "ddgst": false 00:07:44.855 }, 00:07:44.855 "method": "bdev_nvme_attach_controller" 00:07:44.855 }' 00:07:44.855 [2024-11-20 09:41:21.607427] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
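Two details of the block above are easy to miss. First, the cat/rpc_cmd pair at host_management.sh@23/@30 is what gives the target something to serve: a Malloc0 ramdisk bdev, the nqn.2016-06.io.spdk:cnode0 subsystem with that bdev as a namespace, a TCP listener on 10.0.0.2:4420, and nqn.2016-06.io.spdk:host0 as the only allowed host. Roughly the same thing expressed as individual rpc.py calls is sketched below; the malloc size/block size and the serial number are illustrative guesses, since the log does not print them. Second, bdevperf is not configured through the target's RPC socket at all: gen_nvmf_target_json emits the bdev_nvme_attach_controller parameters printed above, and the harness hands them to bdevperf as a JSON config on an anonymous descriptor (--json /dev/fd/63 via process substitution). The sketch spells the same thing out with a temporary file for clarity; the file name is arbitrary and the wrapper follows SPDK's usual subsystems/bdev JSON layout.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create -b Malloc0 64 512                        # size/block size assumed for illustration
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0   # serial number assumed for illustration
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# stand-in for the --json /dev/fd/63 trick: same parameters, written to a regular file
cat > /tmp/bdevperf_nvme0.json <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false } } ] } ] }
JSON
"$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10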
00:07:44.855 [2024-11-20 09:41:21.607511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635249 ] 00:07:44.855 [2024-11-20 09:41:21.677068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.855 [2024-11-20 09:41:21.737499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.421 Running I/O for 10 seconds... 00:07:45.421 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.421 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:45.421 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:45.421 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.421 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.421 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.421 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:45.421 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:45.421 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:45.421 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:45.421 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:45.421 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:45.421 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:45.421 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:45.421 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:45.421 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:45.421 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.422 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.422 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.422 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:45.422 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:45.422 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:45.682 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:45.682 
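The read_io_count checks that start here come from the waitforio helper in host_management.sh: it polls bdevperf's own RPC socket with bdev_get_iostat and lets the test continue only once at least 100 reads have completed against Nvme0n1 (the first poll above sees 67, the next one below sees 557 and breaks out of the loop). A stripped-down version of that polling loop, assuming the same socket and bdev name as this run:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
wait_for_reads() {
    # poll bdevperf (not the target) until Nvme0n1 has served at least 100 reads, ~10 tries max
    local sock=/var/tmp/bdevperf.sock bdev=Nvme0n1 i=10 n
    while (( i-- )); do
        n=$("$SPDK/scripts/rpc.py" -s "$sock" bdev_get_iostat -b "$bdev" \
              | jq -r '.bdevs[0].num_read_ops')
        [ "$n" -ge 100 ] && return 0
        sleep 0.25
    done
    return 1    # bdevperf never generated enough traffic; let the caller fail the test
}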
09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:45.682 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:45.682 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:45.682 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.682 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.682 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.682 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=557 00:07:45.682 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 557 -ge 100 ']' 00:07:45.682 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:45.682 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:45.682 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:45.682 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:45.682 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.682 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.682 [2024-11-20 09:41:22.434423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is 
same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.682 [2024-11-20 09:41:22.434954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.434967] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.434979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.434992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the 
state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2f10 is same with the state(6) to be set 00:07:45.683 [2024-11-20 09:41:22.435475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.435516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.435544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.435561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.435578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.435601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.435616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.435632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.435648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.435663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.435679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.435693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.435709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.435724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.435740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.435754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.435770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.435785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.435801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.435816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.435838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.435853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.435869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.435884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.435900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.435915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.435930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.435945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.435961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.435976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.435992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.436007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.436022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.436037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.436053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.436068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.436083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.436098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.436114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.436129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.436145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.436159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.436176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.683 [2024-11-20 09:41:22.436190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.683 [2024-11-20 09:41:22.436206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:45.684 [2024-11-20 09:41:22.436352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 
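The long run of READ / "ABORTED - SQ DELETION" pairs through this stretch is the expected fallout of the nvmf_subsystem_remove_host call issued above, not a malfunction: once nqn.2016-06.io.spdk:host0 loses access to cnode0 the target tears the queue pair down (the burst of tcp.c recv-state messages above accompanies that teardown), and every read bdevperf still had in flight — cid 0 through 63, matching the -q 64 queue depth, each a 128-block read consistent with the 65536-byte I/O size — completes with an abort status instead of data. Just below, the initiator responds by resetting the controller, the test re-adds the host, and the reset then succeeds. The two RPCs driving this, written as plain rpc.py calls run from the SPDK tree (the script itself goes through its rpc_cmd wrapper, but these are the same methods):

# toggle host access on the running subsystem
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # in-flight I/O is aborted
scripts/rpc.py nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # controller reset can now complete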
[2024-11-20 09:41:22.436675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.436951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 
09:41:22.436981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.436996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.437011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.437029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.437047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.437062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.437078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.437093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.437109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.437124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.437140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.437155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.437170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.437185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.437201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.437216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.437231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.437246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.437262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.437276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.437300] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.437325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.437342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.684 [2024-11-20 09:41:22.437357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.684 [2024-11-20 09:41:22.437373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.685 [2024-11-20 09:41:22.437388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.685 [2024-11-20 09:41:22.437404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.685 [2024-11-20 09:41:22.437419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.685 [2024-11-20 09:41:22.437440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.685 [2024-11-20 09:41:22.437457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.685 [2024-11-20 09:41:22.437473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.685 [2024-11-20 09:41:22.437488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.685 [2024-11-20 09:41:22.437504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.685 [2024-11-20 09:41:22.437518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.685 [2024-11-20 09:41:22.437534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.685 [2024-11-20 09:41:22.437549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.685 [2024-11-20 09:41:22.437564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7da60 is same with the state(6) to be set 00:07:45.685 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.685 [2024-11-20 09:41:22.438811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:45.685 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:45.685 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.685 09:41:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.685 task offset: 73728 on job bdev=Nvme0n1 fails 00:07:45.685 00:07:45.685 Latency(us) 00:07:45.685 [2024-11-20T08:41:22.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.685 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:45.685 Job: Nvme0n1 ended in about 0.39 seconds with error 00:07:45.685 Verification LBA range: start 0x0 length 0x400 00:07:45.685 Nvme0n1 : 0.39 1481.04 92.56 164.56 0.00 37762.05 6796.33 34952.53 00:07:45.685 [2024-11-20T08:41:22.599Z] =================================================================================================================== 00:07:45.685 [2024-11-20T08:41:22.599Z] Total : 1481.04 92.56 164.56 0.00 37762.05 6796.33 34952.53 00:07:45.685 [2024-11-20 09:41:22.440995] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.685 [2024-11-20 09:41:22.441037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb64a40 (9): Bad file descriptor 00:07:45.685 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.685 09:41:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:45.685 [2024-11-20 09:41:22.448502] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:46.617 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3635249 00:07:46.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3635249) - No such process 00:07:46.618 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:46.618 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:46.618 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:46.618 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:46.618 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:46.618 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:46.618 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:46.618 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:46.618 { 00:07:46.618 "params": { 00:07:46.618 "name": "Nvme$subsystem", 00:07:46.618 "trtype": "$TEST_TRANSPORT", 00:07:46.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:46.618 "adrfam": "ipv4", 00:07:46.618 "trsvcid": "$NVMF_PORT", 00:07:46.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:46.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:46.618 "hdgst": ${hdgst:-false}, 00:07:46.618 "ddgst": ${ddgst:-false} 00:07:46.618 }, 00:07:46.618 "method": "bdev_nvme_attach_controller" 00:07:46.618 } 00:07:46.618 EOF 00:07:46.618 )") 00:07:46.618 09:41:23 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:46.618 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:46.618 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:46.618 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:46.618 "params": { 00:07:46.618 "name": "Nvme0", 00:07:46.618 "trtype": "tcp", 00:07:46.618 "traddr": "10.0.0.2", 00:07:46.618 "adrfam": "ipv4", 00:07:46.618 "trsvcid": "4420", 00:07:46.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:46.618 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:46.618 "hdgst": false, 00:07:46.618 "ddgst": false 00:07:46.618 }, 00:07:46.618 "method": "bdev_nvme_attach_controller" 00:07:46.618 }' 00:07:46.618 [2024-11-20 09:41:23.498784] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:07:46.618 [2024-11-20 09:41:23.498875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635518 ] 00:07:46.876 [2024-11-20 09:41:23.568559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.876 [2024-11-20 09:41:23.627635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.133 Running I/O for 1 seconds... 00:07:48.506 1664.00 IOPS, 104.00 MiB/s 00:07:48.506 Latency(us) 00:07:48.506 [2024-11-20T08:41:25.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.506 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:48.506 Verification LBA range: start 0x0 length 0x400 00:07:48.506 Nvme0n1 : 1.02 1696.50 106.03 0.00 0.00 37112.98 6019.60 33981.63 00:07:48.506 [2024-11-20T08:41:25.420Z] =================================================================================================================== 00:07:48.506 [2024-11-20T08:41:25.420Z] Total : 1696.50 106.03 0.00 0.00 37112.98 6019.60 33981.63 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # 
modprobe -v -r nvme-tcp 00:07:48.506 rmmod nvme_tcp 00:07:48.506 rmmod nvme_fabrics 00:07:48.506 rmmod nvme_keyring 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3635197 ']' 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3635197 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3635197 ']' 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3635197 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3635197 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3635197' 00:07:48.506 killing process with pid 3635197 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3635197 00:07:48.506 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3635197 00:07:48.765 [2024-11-20 09:41:25.578388] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:48.765 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:48.765 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:48.765 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:48.765 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:48.765 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:48.765 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:48.765 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:48.765 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:48.765 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:48.765 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.765 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.765 09:41:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:51.299 00:07:51.299 real 0m9.128s 00:07:51.299 user 0m20.517s 00:07:51.299 sys 0m2.857s 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.299 ************************************ 00:07:51.299 END TEST nvmf_host_management 00:07:51.299 ************************************ 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:51.299 ************************************ 00:07:51.299 START TEST nvmf_lvol 00:07:51.299 ************************************ 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:51.299 * Looking for test storage... 00:07:51.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:51.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.299 --rc genhtml_branch_coverage=1 00:07:51.299 --rc genhtml_function_coverage=1 00:07:51.299 --rc genhtml_legend=1 00:07:51.299 --rc geninfo_all_blocks=1 00:07:51.299 --rc geninfo_unexecuted_blocks=1 00:07:51.299 00:07:51.299 ' 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:51.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.299 --rc genhtml_branch_coverage=1 00:07:51.299 --rc genhtml_function_coverage=1 00:07:51.299 --rc genhtml_legend=1 00:07:51.299 --rc geninfo_all_blocks=1 00:07:51.299 --rc geninfo_unexecuted_blocks=1 00:07:51.299 00:07:51.299 ' 00:07:51.299 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:51.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.299 --rc genhtml_branch_coverage=1 00:07:51.299 --rc genhtml_function_coverage=1 00:07:51.300 --rc genhtml_legend=1 00:07:51.300 --rc geninfo_all_blocks=1 00:07:51.300 --rc geninfo_unexecuted_blocks=1 00:07:51.300 00:07:51.300 ' 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:51.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.300 --rc genhtml_branch_coverage=1 00:07:51.300 --rc genhtml_function_coverage=1 00:07:51.300 --rc genhtml_legend=1 00:07:51.300 --rc geninfo_all_blocks=1 00:07:51.300 --rc geninfo_unexecuted_blocks=1 00:07:51.300 00:07:51.300 ' 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.300 09:41:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:51.300 09:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.198 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.198 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:53.198 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:53.198 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:53.198 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:53.198 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:53.198 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:53.198 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:53.198 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:53.198 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:53.199 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:53.199 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.199 09:41:29 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:53.199 Found net devices under 0000:09:00.0: cvl_0_0 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.199 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:53.199 Found net devices under 0000:09:00.1: cvl_0_1 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:53.199 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:53.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:07:53.457 00:07:53.457 --- 10.0.0.2 ping statistics --- 00:07:53.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.457 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:53.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:07:53.457 00:07:53.457 --- 10.0.0.1 ping statistics --- 00:07:53.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.457 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3637734 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3637734 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3637734 ']' 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.457 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.457 [2024-11-20 09:41:30.225043] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
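(For reference: the network plumbing the harness just performed, condensed into a shell sketch. The interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace, and the 10.0.0.x addresses are specific to this run; the sketch only restates the commands visible in the log above.)

# move the target-side port into its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator side keeps cvl_0_1 in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open TCP/4420 for NVMe-oF traffic on the initiator-facing port
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# sanity-check reachability in both directions before starting nvmf_tgt
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1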
00:07:53.457 [2024-11-20 09:41:30.225151] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.457 [2024-11-20 09:41:30.299665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.457 [2024-11-20 09:41:30.356480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.457 [2024-11-20 09:41:30.356551] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.457 [2024-11-20 09:41:30.356575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.457 [2024-11-20 09:41:30.356585] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.457 [2024-11-20 09:41:30.356609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.457 [2024-11-20 09:41:30.358127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.457 [2024-11-20 09:41:30.358277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.457 [2024-11-20 09:41:30.358280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.715 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.715 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:53.715 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:53.715 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.715 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.715 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.715 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:53.972 [2024-11-20 09:41:30.742930] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.972 09:41:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:54.229 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:54.229 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:54.487 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:54.487 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:54.745 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:55.310 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f1461324-efc3-4fdc-8a8c-d1e674db42b3 00:07:55.310 09:41:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f1461324-efc3-4fdc-8a8c-d1e674db42b3 lvol 20 00:07:55.310 09:41:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=492e819a-ea7e-4ff6-b41c-72f75e9be9aa 00:07:55.310 09:41:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:55.875 09:41:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 492e819a-ea7e-4ff6-b41c-72f75e9be9aa 00:07:55.875 09:41:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:56.133 [2024-11-20 09:41:33.001681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.133 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:56.390 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3638053 00:07:56.390 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:56.390 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:57.761 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 492e819a-ea7e-4ff6-b41c-72f75e9be9aa MY_SNAPSHOT 00:07:57.761 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1313931d-0052-4f4d-bfdf-30d848e1b952 00:07:57.761 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 492e819a-ea7e-4ff6-b41c-72f75e9be9aa 30 00:07:58.326 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1313931d-0052-4f4d-bfdf-30d848e1b952 MY_CLONE 00:07:58.583 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b12503b6-aaf5-451a-9fe0-7d6bc70a9bf8 00:07:58.584 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b12503b6-aaf5-451a-9fe0-7d6bc70a9bf8 00:07:59.148 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3638053 00:08:07.371 Initializing NVMe Controllers 00:08:07.371 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:07.371 Controller IO queue size 128, less than required. 00:08:07.371 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:07.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:07.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:07.371 Initialization complete. Launching workers. 00:08:07.371 ======================================================== 00:08:07.371 Latency(us) 00:08:07.371 Device Information : IOPS MiB/s Average min max 00:08:07.371 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10363.20 40.48 12360.86 2027.33 75664.62 00:08:07.371 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10313.60 40.29 12413.87 2279.03 71462.79 00:08:07.371 ======================================================== 00:08:07.371 Total : 20676.80 80.77 12387.30 2027.33 75664.62 00:08:07.371 00:08:07.371 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:07.371 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 492e819a-ea7e-4ff6-b41c-72f75e9be9aa 00:08:07.371 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f1461324-efc3-4fdc-8a8c-d1e674db42b3 00:08:07.630 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:07.630 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:07.630 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:07.630 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:07.630 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:07.630 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:07.630 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:07.630 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:07.630 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:07.630 rmmod nvme_tcp 00:08:07.630 rmmod nvme_fabrics 00:08:07.630 rmmod nvme_keyring 00:08:07.630 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:07.630 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:07.630 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:07.630 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3637734 ']' 00:08:07.630 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3637734 00:08:07.630 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3637734 ']' 00:08:07.630 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3637734 00:08:07.630 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:07.630 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.630 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3637734 00:08:07.889 09:41:44 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.889 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.889 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3637734' 00:08:07.889 killing process with pid 3637734 00:08:07.889 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3637734 00:08:07.889 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3637734 00:08:08.149 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:08.149 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:08.149 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:08.149 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:08.149 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:08.149 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:08.149 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:08.149 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:08.149 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:08.149 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.149 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.149 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.057 09:41:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:10.057 00:08:10.057 real 0m19.173s 00:08:10.057 user 1m5.433s 00:08:10.057 sys 0m5.386s 00:08:10.057 09:41:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.057 09:41:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:10.057 ************************************ 00:08:10.057 END TEST nvmf_lvol 00:08:10.057 ************************************ 00:08:10.057 09:41:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:10.057 09:41:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:10.057 09:41:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.057 09:41:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:10.057 ************************************ 00:08:10.057 START TEST nvmf_lvs_grow 00:08:10.057 ************************************ 00:08:10.057 09:41:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:10.316 * Looking for test storage... 
00:08:10.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.316 09:41:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:10.316 09:41:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:10.316 09:41:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.316 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:10.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.316 --rc genhtml_branch_coverage=1 00:08:10.317 --rc genhtml_function_coverage=1 00:08:10.317 --rc genhtml_legend=1 00:08:10.317 --rc geninfo_all_blocks=1 00:08:10.317 --rc geninfo_unexecuted_blocks=1 00:08:10.317 00:08:10.317 ' 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:10.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.317 --rc genhtml_branch_coverage=1 00:08:10.317 --rc genhtml_function_coverage=1 00:08:10.317 --rc genhtml_legend=1 00:08:10.317 --rc geninfo_all_blocks=1 00:08:10.317 --rc geninfo_unexecuted_blocks=1 00:08:10.317 00:08:10.317 ' 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:10.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.317 --rc genhtml_branch_coverage=1 00:08:10.317 --rc genhtml_function_coverage=1 00:08:10.317 --rc genhtml_legend=1 00:08:10.317 --rc geninfo_all_blocks=1 00:08:10.317 --rc geninfo_unexecuted_blocks=1 00:08:10.317 00:08:10.317 ' 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:10.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.317 --rc genhtml_branch_coverage=1 00:08:10.317 --rc genhtml_function_coverage=1 00:08:10.317 --rc genhtml_legend=1 00:08:10.317 --rc geninfo_all_blocks=1 00:08:10.317 --rc geninfo_unexecuted_blocks=1 00:08:10.317 00:08:10.317 ' 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:10.317 09:41:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:10.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:10.317 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:12.848 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:12.848 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:12.849 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:12.849 09:41:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:12.849 Found net devices under 0000:09:00.0: cvl_0_0 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:12.849 Found net devices under 0000:09:00.1: cvl_0_1 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:12.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:08:12.849 00:08:12.849 --- 10.0.0.2 ping statistics --- 00:08:12.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.849 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:12.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:12.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:08:12.849 00:08:12.849 --- 10.0.0.1 ping statistics --- 00:08:12.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.849 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3641459 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3641459 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3641459 ']' 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:12.849 [2024-11-20 09:41:49.423567] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
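For reference, the nvmftestinit trace above condenses to roughly the following shell sequence. This is a minimal sketch, not the nvmf/common.sh implementation itself; it reuses the interface names (cvl_0_0 / cvl_0_1), addresses, and paths this run detected, with SPDK_DIR as shorthand for the checkout used here.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout used by this run
# Move the target-side port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open TCP port 4420 on the initiator-facing interface and verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Load the initiator transport and start the target application inside the namespace
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &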
00:08:12.849 [2024-11-20 09:41:49.423682] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.849 [2024-11-20 09:41:49.495753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.849 [2024-11-20 09:41:49.548727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.849 [2024-11-20 09:41:49.548784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.849 [2024-11-20 09:41:49.548807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.849 [2024-11-20 09:41:49.548817] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.849 [2024-11-20 09:41:49.548826] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.849 [2024-11-20 09:41:49.549409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:12.849 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:12.850 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.850 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:13.108 [2024-11-20 09:41:49.923932] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.108 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:13.108 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:13.108 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.108 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:13.108 ************************************ 00:08:13.108 START TEST lvs_grow_clean 00:08:13.108 ************************************ 00:08:13.108 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:13.108 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:13.108 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:13.108 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:13.108 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:13.108 09:41:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:13.108 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:13.108 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:13.108 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:13.108 09:41:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:13.367 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:13.367 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:13.933 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=197ec0d8-6ff6-4c8e-a0f3-48e92adbbeca 00:08:13.933 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 197ec0d8-6ff6-4c8e-a0f3-48e92adbbeca 00:08:13.933 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:13.933 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:13.933 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:13.933 09:41:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 197ec0d8-6ff6-4c8e-a0f3-48e92adbbeca lvol 150 00:08:14.500 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=96dc0694-b94f-459d-8e56-18fcf7fe7079 00:08:14.500 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:14.500 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:14.500 [2024-11-20 09:41:51.366824] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:14.500 [2024-11-20 09:41:51.366912] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:14.500 true 00:08:14.500 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
197ec0d8-6ff6-4c8e-a0f3-48e92adbbeca 00:08:14.500 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:14.758 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:14.758 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:15.324 09:41:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 96dc0694-b94f-459d-8e56-18fcf7fe7079 00:08:15.324 09:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:15.582 [2024-11-20 09:41:52.441980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.582 09:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:15.841 09:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3641897 00:08:15.841 09:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:15.841 09:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:15.841 09:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3641897 /var/tmp/bdevperf.sock 00:08:15.841 09:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3641897 ']' 00:08:15.841 09:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:15.841 09:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.841 09:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:15.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:15.841 09:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.841 09:41:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:16.100 [2024-11-20 09:41:52.784997] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
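The lvs_grow setup traced above (after nvmf_create_transport -t tcp -o -u 8192) prepares the backing store with a short chain of RPCs. A condensed sketch of what the trace shows, using the same sizes (200M AIO file, 4 MiB clusters, 150M lvol) and RPC as shorthand for the rpc.py path used in this run:
RPC="$SPDK_DIR"/scripts/rpc.py
# 200M file-backed AIO bdev -> lvstore with 4 MiB clusters (49 usable data clusters)
rm -f "$SPDK_DIR"/test/nvmf/target/aio_bdev
truncate -s 200M "$SPDK_DIR"/test/nvmf/target/aio_bdev
"$RPC" bdev_aio_create "$SPDK_DIR"/test/nvmf/target/aio_bdev aio_bdev 4096
lvs=$("$RPC" bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
"$RPC" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 49
# 150M lvol, then enlarge the backing file and let the AIO bdev pick up the new size
lvol=$("$RPC" bdev_lvol_create -u "$lvs" lvol 150)
truncate -s 400M "$SPDK_DIR"/test/nvmf/target/aio_bdev
"$RPC" bdev_aio_rescan aio_bdev
# Export the lvol over NVMe/TCP on the target address configured earlier
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420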
00:08:16.100 [2024-11-20 09:41:52.785084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641897 ] 00:08:16.100 [2024-11-20 09:41:52.851796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.100 [2024-11-20 09:41:52.913015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.357 09:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.357 09:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:16.357 09:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:16.615 Nvme0n1 00:08:16.615 09:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:16.874 [ 00:08:16.874 { 00:08:16.874 "name": "Nvme0n1", 00:08:16.874 "aliases": [ 00:08:16.874 "96dc0694-b94f-459d-8e56-18fcf7fe7079" 00:08:16.874 ], 00:08:16.874 "product_name": "NVMe disk", 00:08:16.874 "block_size": 4096, 00:08:16.874 "num_blocks": 38912, 00:08:16.874 "uuid": "96dc0694-b94f-459d-8e56-18fcf7fe7079", 00:08:16.874 "numa_id": 0, 00:08:16.874 "assigned_rate_limits": { 00:08:16.874 "rw_ios_per_sec": 0, 00:08:16.874 "rw_mbytes_per_sec": 0, 00:08:16.874 "r_mbytes_per_sec": 0, 00:08:16.874 "w_mbytes_per_sec": 0 00:08:16.874 }, 00:08:16.874 "claimed": false, 00:08:16.874 "zoned": false, 00:08:16.874 "supported_io_types": { 00:08:16.874 "read": true, 00:08:16.874 "write": true, 00:08:16.874 "unmap": true, 00:08:16.874 "flush": true, 00:08:16.874 "reset": true, 00:08:16.874 "nvme_admin": true, 00:08:16.874 "nvme_io": true, 00:08:16.874 "nvme_io_md": false, 00:08:16.874 "write_zeroes": true, 00:08:16.874 "zcopy": false, 00:08:16.874 "get_zone_info": false, 00:08:16.874 "zone_management": false, 00:08:16.874 "zone_append": false, 00:08:16.874 "compare": true, 00:08:16.874 "compare_and_write": true, 00:08:16.874 "abort": true, 00:08:16.874 "seek_hole": false, 00:08:16.874 "seek_data": false, 00:08:16.874 "copy": true, 00:08:16.874 "nvme_iov_md": false 00:08:16.874 }, 00:08:16.874 "memory_domains": [ 00:08:16.874 { 00:08:16.874 "dma_device_id": "system", 00:08:16.874 "dma_device_type": 1 00:08:16.874 } 00:08:16.874 ], 00:08:16.874 "driver_specific": { 00:08:16.874 "nvme": [ 00:08:16.874 { 00:08:16.874 "trid": { 00:08:16.874 "trtype": "TCP", 00:08:16.874 "adrfam": "IPv4", 00:08:16.874 "traddr": "10.0.0.2", 00:08:16.874 "trsvcid": "4420", 00:08:16.874 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:16.874 }, 00:08:16.874 "ctrlr_data": { 00:08:16.874 "cntlid": 1, 00:08:16.874 "vendor_id": "0x8086", 00:08:16.874 "model_number": "SPDK bdev Controller", 00:08:16.874 "serial_number": "SPDK0", 00:08:16.874 "firmware_revision": "25.01", 00:08:16.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:16.874 "oacs": { 00:08:16.874 "security": 0, 00:08:16.874 "format": 0, 00:08:16.874 "firmware": 0, 00:08:16.874 "ns_manage": 0 00:08:16.874 }, 00:08:16.874 "multi_ctrlr": true, 00:08:16.874 
"ana_reporting": false 00:08:16.874 }, 00:08:16.874 "vs": { 00:08:16.874 "nvme_version": "1.3" 00:08:16.874 }, 00:08:16.874 "ns_data": { 00:08:16.874 "id": 1, 00:08:16.874 "can_share": true 00:08:16.874 } 00:08:16.874 } 00:08:16.874 ], 00:08:16.874 "mp_policy": "active_passive" 00:08:16.874 } 00:08:16.874 } 00:08:16.874 ] 00:08:16.874 09:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3641978 00:08:16.874 09:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:16.874 09:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:17.133 Running I/O for 10 seconds... 00:08:18.066 Latency(us) 00:08:18.066 [2024-11-20T08:41:54.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.066 Nvme0n1 : 1.00 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:08:18.066 [2024-11-20T08:41:54.980Z] =================================================================================================================== 00:08:18.066 [2024-11-20T08:41:54.980Z] Total : 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:08:18.066 00:08:18.999 09:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 197ec0d8-6ff6-4c8e-a0f3-48e92adbbeca 00:08:18.999 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.999 Nvme0n1 : 2.00 15240.50 59.53 0.00 0.00 0.00 0.00 0.00 00:08:19.000 [2024-11-20T08:41:55.914Z] =================================================================================================================== 00:08:19.000 [2024-11-20T08:41:55.914Z] Total : 15240.50 59.53 0.00 0.00 0.00 0.00 0.00 00:08:19.000 00:08:19.258 true 00:08:19.258 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 197ec0d8-6ff6-4c8e-a0f3-48e92adbbeca 00:08:19.258 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:19.515 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:19.515 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:19.515 09:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3641978 00:08:20.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.082 Nvme0n1 : 3.00 15368.33 60.03 0.00 0.00 0.00 0.00 0.00 00:08:20.082 [2024-11-20T08:41:56.996Z] =================================================================================================================== 00:08:20.082 [2024-11-20T08:41:56.996Z] Total : 15368.33 60.03 0.00 0.00 0.00 0.00 0.00 00:08:20.082 00:08:21.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.022 Nvme0n1 : 4.00 15464.50 60.41 0.00 0.00 0.00 0.00 0.00 00:08:21.022 [2024-11-20T08:41:57.936Z] 
=================================================================================================================== 00:08:21.022 [2024-11-20T08:41:57.936Z] Total : 15464.50 60.41 0.00 0.00 0.00 0.00 0.00 00:08:21.022 00:08:21.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.956 Nvme0n1 : 5.00 15559.40 60.78 0.00 0.00 0.00 0.00 0.00 00:08:21.956 [2024-11-20T08:41:58.870Z] =================================================================================================================== 00:08:21.956 [2024-11-20T08:41:58.870Z] Total : 15559.40 60.78 0.00 0.00 0.00 0.00 0.00 00:08:21.956 00:08:23.354 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.354 Nvme0n1 : 6.00 15612.00 60.98 0.00 0.00 0.00 0.00 0.00 00:08:23.354 [2024-11-20T08:42:00.268Z] =================================================================================================================== 00:08:23.354 [2024-11-20T08:42:00.268Z] Total : 15612.00 60.98 0.00 0.00 0.00 0.00 0.00 00:08:23.354 00:08:24.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.287 Nvme0n1 : 7.00 15595.14 60.92 0.00 0.00 0.00 0.00 0.00 00:08:24.287 [2024-11-20T08:42:01.201Z] =================================================================================================================== 00:08:24.287 [2024-11-20T08:42:01.201Z] Total : 15595.14 60.92 0.00 0.00 0.00 0.00 0.00 00:08:24.287 00:08:25.219 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.219 Nvme0n1 : 8.00 15630.12 61.06 0.00 0.00 0.00 0.00 0.00 00:08:25.219 [2024-11-20T08:42:02.133Z] =================================================================================================================== 00:08:25.219 [2024-11-20T08:42:02.133Z] Total : 15630.12 61.06 0.00 0.00 0.00 0.00 0.00 00:08:25.219 00:08:26.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.153 Nvme0n1 : 9.00 15671.44 61.22 0.00 0.00 0.00 0.00 0.00 00:08:26.153 [2024-11-20T08:42:03.067Z] =================================================================================================================== 00:08:26.153 [2024-11-20T08:42:03.067Z] Total : 15671.44 61.22 0.00 0.00 0.00 0.00 0.00 00:08:26.153 00:08:27.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.087 Nvme0n1 : 10.00 15698.20 61.32 0.00 0.00 0.00 0.00 0.00 00:08:27.087 [2024-11-20T08:42:04.001Z] =================================================================================================================== 00:08:27.087 [2024-11-20T08:42:04.001Z] Total : 15698.20 61.32 0.00 0.00 0.00 0.00 0.00 00:08:27.087 00:08:27.087 00:08:27.087 Latency(us) 00:08:27.087 [2024-11-20T08:42:04.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.087 Nvme0n1 : 10.00 15700.08 61.33 0.00 0.00 8148.01 4369.07 16117.00 00:08:27.087 [2024-11-20T08:42:04.001Z] =================================================================================================================== 00:08:27.087 [2024-11-20T08:42:04.001Z] Total : 15700.08 61.33 0.00 0.00 8148.01 4369.07 16117.00 00:08:27.087 { 00:08:27.087 "results": [ 00:08:27.087 { 00:08:27.087 "job": "Nvme0n1", 00:08:27.087 "core_mask": "0x2", 00:08:27.087 "workload": "randwrite", 00:08:27.087 "status": "finished", 00:08:27.087 "queue_depth": 128, 00:08:27.087 "io_size": 4096, 00:08:27.087 
"runtime": 10.002877, 00:08:27.087 "iops": 15700.08308609613, 00:08:27.087 "mibps": 61.32844955506301, 00:08:27.087 "io_failed": 0, 00:08:27.087 "io_timeout": 0, 00:08:27.087 "avg_latency_us": 8148.005520213232, 00:08:27.087 "min_latency_us": 4369.066666666667, 00:08:27.087 "max_latency_us": 16117.001481481482 00:08:27.087 } 00:08:27.087 ], 00:08:27.087 "core_count": 1 00:08:27.087 } 00:08:27.087 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3641897 00:08:27.087 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3641897 ']' 00:08:27.087 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3641897 00:08:27.087 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:27.087 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.087 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3641897 00:08:27.087 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:27.087 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:27.087 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3641897' 00:08:27.087 killing process with pid 3641897 00:08:27.087 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3641897 00:08:27.087 Received shutdown signal, test time was about 10.000000 seconds 00:08:27.087 00:08:27.087 Latency(us) 00:08:27.087 [2024-11-20T08:42:04.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.087 [2024-11-20T08:42:04.001Z] =================================================================================================================== 00:08:27.087 [2024-11-20T08:42:04.001Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:27.087 09:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3641897 00:08:27.346 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:27.603 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:27.861 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 197ec0d8-6ff6-4c8e-a0f3-48e92adbbeca 00:08:27.861 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:28.119 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:28.119 09:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:28.119 09:42:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:28.377 [2024-11-20 09:42:05.240115] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:28.377 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 197ec0d8-6ff6-4c8e-a0f3-48e92adbbeca 00:08:28.377 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:28.377 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 197ec0d8-6ff6-4c8e-a0f3-48e92adbbeca 00:08:28.377 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:28.377 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.377 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:28.377 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.377 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:28.377 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.377 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:28.377 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:28.377 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 197ec0d8-6ff6-4c8e-a0f3-48e92adbbeca 00:08:28.635 request: 00:08:28.635 { 00:08:28.635 "uuid": "197ec0d8-6ff6-4c8e-a0f3-48e92adbbeca", 00:08:28.635 "method": "bdev_lvol_get_lvstores", 00:08:28.635 "req_id": 1 00:08:28.635 } 00:08:28.635 Got JSON-RPC error response 00:08:28.635 response: 00:08:28.635 { 00:08:28.635 "code": -19, 00:08:28.635 "message": "No such device" 00:08:28.635 } 00:08:28.635 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:28.635 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:28.635 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:28.635 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:28.635 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:28.893 aio_bdev 00:08:29.152 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 96dc0694-b94f-459d-8e56-18fcf7fe7079 00:08:29.152 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=96dc0694-b94f-459d-8e56-18fcf7fe7079 00:08:29.152 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.152 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:29.152 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.152 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.152 09:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:29.410 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 96dc0694-b94f-459d-8e56-18fcf7fe7079 -t 2000 00:08:29.669 [ 00:08:29.669 { 00:08:29.669 "name": "96dc0694-b94f-459d-8e56-18fcf7fe7079", 00:08:29.669 "aliases": [ 00:08:29.669 "lvs/lvol" 00:08:29.669 ], 00:08:29.669 "product_name": "Logical Volume", 00:08:29.669 "block_size": 4096, 00:08:29.669 "num_blocks": 38912, 00:08:29.669 "uuid": "96dc0694-b94f-459d-8e56-18fcf7fe7079", 00:08:29.669 "assigned_rate_limits": { 00:08:29.669 "rw_ios_per_sec": 0, 00:08:29.669 "rw_mbytes_per_sec": 0, 00:08:29.669 "r_mbytes_per_sec": 0, 00:08:29.669 "w_mbytes_per_sec": 0 00:08:29.669 }, 00:08:29.669 "claimed": false, 00:08:29.669 "zoned": false, 00:08:29.669 "supported_io_types": { 00:08:29.669 "read": true, 00:08:29.669 "write": true, 00:08:29.669 "unmap": true, 00:08:29.669 "flush": false, 00:08:29.669 "reset": true, 00:08:29.669 "nvme_admin": false, 00:08:29.669 "nvme_io": false, 00:08:29.669 "nvme_io_md": false, 00:08:29.669 "write_zeroes": true, 00:08:29.669 "zcopy": false, 00:08:29.669 "get_zone_info": false, 00:08:29.669 "zone_management": false, 00:08:29.669 "zone_append": false, 00:08:29.669 "compare": false, 00:08:29.669 "compare_and_write": false, 00:08:29.669 "abort": false, 00:08:29.669 "seek_hole": true, 00:08:29.669 "seek_data": true, 00:08:29.669 "copy": false, 00:08:29.669 "nvme_iov_md": false 00:08:29.669 }, 00:08:29.669 "driver_specific": { 00:08:29.669 "lvol": { 00:08:29.669 "lvol_store_uuid": "197ec0d8-6ff6-4c8e-a0f3-48e92adbbeca", 00:08:29.669 "base_bdev": "aio_bdev", 00:08:29.669 "thin_provision": false, 00:08:29.669 "num_allocated_clusters": 38, 00:08:29.669 "snapshot": false, 00:08:29.669 "clone": false, 00:08:29.669 "esnap_clone": false 00:08:29.669 } 00:08:29.669 } 00:08:29.669 } 00:08:29.669 ] 00:08:29.669 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:29.669 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 197ec0d8-6ff6-4c8e-a0f3-48e92adbbeca 00:08:29.669 
09:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:29.928 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:29.928 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 197ec0d8-6ff6-4c8e-a0f3-48e92adbbeca 00:08:29.928 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:30.186 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:30.186 09:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 96dc0694-b94f-459d-8e56-18fcf7fe7079 00:08:30.444 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 197ec0d8-6ff6-4c8e-a0f3-48e92adbbeca 00:08:30.702 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:30.962 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:30.962 00:08:30.962 real 0m17.797s 00:08:30.962 user 0m17.261s 00:08:30.962 sys 0m1.895s 00:08:30.962 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.962 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:30.962 ************************************ 00:08:30.962 END TEST lvs_grow_clean 00:08:30.962 ************************************ 00:08:30.962 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:30.962 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:30.962 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.962 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.962 ************************************ 00:08:30.962 START TEST lvs_grow_dirty 00:08:30.962 ************************************ 00:08:30.962 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:30.962 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:30.962 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:30.962 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:30.962 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:30.962 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:30.962 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:30.962 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:30.962 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:30.962 09:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:31.528 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:31.529 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:31.529 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c199fa01-b0ff-46d7-a641-4cbd756da50a 00:08:31.529 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c199fa01-b0ff-46d7-a641-4cbd756da50a 00:08:31.529 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:31.786 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:31.787 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:31.787 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c199fa01-b0ff-46d7-a641-4cbd756da50a lvol 150 00:08:32.353 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=617d40df-3338-4ebb-8e65-00076319eb5c 00:08:32.353 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.353 09:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:32.353 [2024-11-20 09:42:09.222798] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:32.353 [2024-11-20 09:42:09.222894] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:32.353 true 00:08:32.353 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c199fa01-b0ff-46d7-a641-4cbd756da50a 00:08:32.353 09:42:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:32.611 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:32.611 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:32.869 09:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 617d40df-3338-4ebb-8e65-00076319eb5c 00:08:33.436 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:33.436 [2024-11-20 09:42:10.314141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.436 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:33.694 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3644590 00:08:33.694 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:33.694 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:33.694 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3644590 /var/tmp/bdevperf.sock 00:08:33.694 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3644590 ']' 00:08:33.694 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:33.694 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.694 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:33.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:33.694 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.694 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:33.952 [2024-11-20 09:42:10.641470] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
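Both the clean pass above and the dirty pass starting here then drive I/O from a separate bdevperf process while the lvstore is grown underneath it. Stripped of the test-framework plumbing, and reusing the $SPDK_DIR / $RPC / $lvs shorthand from the earlier sketches, the traced sequence is approximately:
# Initiator side: bdevperf with its own RPC socket, attached to the exported subsystem
"$SPDK_DIR"/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
"$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
# Start the 10-second randwrite run, then grow the lvstore into the enlarged AIO bdev mid-run
"$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
"$RPC" bdev_lvol_grow_lvstore -u "$lvs"
# Success is the data-cluster count doubling (49 -> 99) while I/O continues without errors
"$RPC" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99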
00:08:33.952 [2024-11-20 09:42:10.641553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644590 ] 00:08:33.952 [2024-11-20 09:42:10.708788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.952 [2024-11-20 09:42:10.767533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.210 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.210 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:34.210 09:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:34.468 Nvme0n1 00:08:34.468 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:34.726 [ 00:08:34.726 { 00:08:34.726 "name": "Nvme0n1", 00:08:34.726 "aliases": [ 00:08:34.726 "617d40df-3338-4ebb-8e65-00076319eb5c" 00:08:34.726 ], 00:08:34.726 "product_name": "NVMe disk", 00:08:34.726 "block_size": 4096, 00:08:34.726 "num_blocks": 38912, 00:08:34.726 "uuid": "617d40df-3338-4ebb-8e65-00076319eb5c", 00:08:34.726 "numa_id": 0, 00:08:34.726 "assigned_rate_limits": { 00:08:34.726 "rw_ios_per_sec": 0, 00:08:34.726 "rw_mbytes_per_sec": 0, 00:08:34.726 "r_mbytes_per_sec": 0, 00:08:34.726 "w_mbytes_per_sec": 0 00:08:34.726 }, 00:08:34.726 "claimed": false, 00:08:34.726 "zoned": false, 00:08:34.726 "supported_io_types": { 00:08:34.726 "read": true, 00:08:34.726 "write": true, 00:08:34.726 "unmap": true, 00:08:34.726 "flush": true, 00:08:34.726 "reset": true, 00:08:34.726 "nvme_admin": true, 00:08:34.726 "nvme_io": true, 00:08:34.726 "nvme_io_md": false, 00:08:34.726 "write_zeroes": true, 00:08:34.726 "zcopy": false, 00:08:34.726 "get_zone_info": false, 00:08:34.726 "zone_management": false, 00:08:34.726 "zone_append": false, 00:08:34.726 "compare": true, 00:08:34.726 "compare_and_write": true, 00:08:34.726 "abort": true, 00:08:34.726 "seek_hole": false, 00:08:34.726 "seek_data": false, 00:08:34.726 "copy": true, 00:08:34.726 "nvme_iov_md": false 00:08:34.726 }, 00:08:34.726 "memory_domains": [ 00:08:34.726 { 00:08:34.726 "dma_device_id": "system", 00:08:34.726 "dma_device_type": 1 00:08:34.726 } 00:08:34.726 ], 00:08:34.726 "driver_specific": { 00:08:34.726 "nvme": [ 00:08:34.726 { 00:08:34.726 "trid": { 00:08:34.726 "trtype": "TCP", 00:08:34.726 "adrfam": "IPv4", 00:08:34.726 "traddr": "10.0.0.2", 00:08:34.726 "trsvcid": "4420", 00:08:34.726 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:34.726 }, 00:08:34.726 "ctrlr_data": { 00:08:34.726 "cntlid": 1, 00:08:34.726 "vendor_id": "0x8086", 00:08:34.726 "model_number": "SPDK bdev Controller", 00:08:34.726 "serial_number": "SPDK0", 00:08:34.726 "firmware_revision": "25.01", 00:08:34.726 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:34.726 "oacs": { 00:08:34.726 "security": 0, 00:08:34.726 "format": 0, 00:08:34.726 "firmware": 0, 00:08:34.726 "ns_manage": 0 00:08:34.726 }, 00:08:34.726 "multi_ctrlr": true, 00:08:34.726 
"ana_reporting": false 00:08:34.726 }, 00:08:34.726 "vs": { 00:08:34.726 "nvme_version": "1.3" 00:08:34.726 }, 00:08:34.726 "ns_data": { 00:08:34.726 "id": 1, 00:08:34.726 "can_share": true 00:08:34.726 } 00:08:34.726 } 00:08:34.726 ], 00:08:34.726 "mp_policy": "active_passive" 00:08:34.726 } 00:08:34.726 } 00:08:34.726 ] 00:08:34.985 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3644726 00:08:34.985 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:34.985 09:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:34.985 Running I/O for 10 seconds... 00:08:35.919 Latency(us) 00:08:35.919 [2024-11-20T08:42:12.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.919 Nvme0n1 : 1.00 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:08:35.919 [2024-11-20T08:42:12.833Z] =================================================================================================================== 00:08:35.919 [2024-11-20T08:42:12.833Z] Total : 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:08:35.919 00:08:36.854 09:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c199fa01-b0ff-46d7-a641-4cbd756da50a 00:08:36.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.854 Nvme0n1 : 2.00 15367.50 60.03 0.00 0.00 0.00 0.00 0.00 00:08:36.854 [2024-11-20T08:42:13.768Z] =================================================================================================================== 00:08:36.854 [2024-11-20T08:42:13.768Z] Total : 15367.50 60.03 0.00 0.00 0.00 0.00 0.00 00:08:36.854 00:08:37.114 true 00:08:37.114 09:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c199fa01-b0ff-46d7-a641-4cbd756da50a 00:08:37.114 09:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:37.402 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:37.402 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:37.402 09:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3644726 00:08:37.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.993 Nvme0n1 : 3.00 15452.00 60.36 0.00 0.00 0.00 0.00 0.00 00:08:37.993 [2024-11-20T08:42:14.907Z] =================================================================================================================== 00:08:37.993 [2024-11-20T08:42:14.907Z] Total : 15452.00 60.36 0.00 0.00 0.00 0.00 0.00 00:08:37.993 00:08:38.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.928 Nvme0n1 : 4.00 15557.75 60.77 0.00 0.00 0.00 0.00 0.00 00:08:38.928 [2024-11-20T08:42:15.842Z] 
=================================================================================================================== 00:08:38.928 [2024-11-20T08:42:15.842Z] Total : 15557.75 60.77 0.00 0.00 0.00 0.00 0.00 00:08:38.928 00:08:39.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.861 Nvme0n1 : 5.00 15621.20 61.02 0.00 0.00 0.00 0.00 0.00 00:08:39.861 [2024-11-20T08:42:16.776Z] =================================================================================================================== 00:08:39.862 [2024-11-20T08:42:16.776Z] Total : 15621.20 61.02 0.00 0.00 0.00 0.00 0.00 00:08:39.862 00:08:41.234 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.234 Nvme0n1 : 6.00 15663.50 61.19 0.00 0.00 0.00 0.00 0.00 00:08:41.234 [2024-11-20T08:42:18.148Z] =================================================================================================================== 00:08:41.234 [2024-11-20T08:42:18.148Z] Total : 15663.50 61.19 0.00 0.00 0.00 0.00 0.00 00:08:41.234 00:08:42.168 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.168 Nvme0n1 : 7.00 15711.86 61.37 0.00 0.00 0.00 0.00 0.00 00:08:42.168 [2024-11-20T08:42:19.082Z] =================================================================================================================== 00:08:42.168 [2024-11-20T08:42:19.082Z] Total : 15711.86 61.37 0.00 0.00 0.00 0.00 0.00 00:08:42.168 00:08:43.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.103 Nvme0n1 : 8.00 15748.12 61.52 0.00 0.00 0.00 0.00 0.00 00:08:43.103 [2024-11-20T08:42:20.017Z] =================================================================================================================== 00:08:43.103 [2024-11-20T08:42:20.017Z] Total : 15748.12 61.52 0.00 0.00 0.00 0.00 0.00 00:08:43.103 00:08:44.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.037 Nvme0n1 : 9.00 15776.33 61.63 0.00 0.00 0.00 0.00 0.00 00:08:44.037 [2024-11-20T08:42:20.951Z] =================================================================================================================== 00:08:44.037 [2024-11-20T08:42:20.951Z] Total : 15776.33 61.63 0.00 0.00 0.00 0.00 0.00 00:08:44.037 00:08:44.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.971 Nvme0n1 : 10.00 15798.90 61.71 0.00 0.00 0.00 0.00 0.00 00:08:44.971 [2024-11-20T08:42:21.885Z] =================================================================================================================== 00:08:44.971 [2024-11-20T08:42:21.885Z] Total : 15798.90 61.71 0.00 0.00 0.00 0.00 0.00 00:08:44.971 00:08:44.971 00:08:44.971 Latency(us) 00:08:44.971 [2024-11-20T08:42:21.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.971 Nvme0n1 : 10.01 15802.06 61.73 0.00 0.00 8095.89 2160.26 15631.55 00:08:44.971 [2024-11-20T08:42:21.885Z] =================================================================================================================== 00:08:44.971 [2024-11-20T08:42:21.885Z] Total : 15802.06 61.73 0.00 0.00 8095.89 2160.26 15631.55 00:08:44.971 { 00:08:44.971 "results": [ 00:08:44.971 { 00:08:44.971 "job": "Nvme0n1", 00:08:44.971 "core_mask": "0x2", 00:08:44.971 "workload": "randwrite", 00:08:44.971 "status": "finished", 00:08:44.971 "queue_depth": 128, 00:08:44.971 "io_size": 4096, 00:08:44.971 
"runtime": 10.006101, 00:08:44.971 "iops": 15802.059163704223, 00:08:44.971 "mibps": 61.72679360821962, 00:08:44.971 "io_failed": 0, 00:08:44.971 "io_timeout": 0, 00:08:44.971 "avg_latency_us": 8095.889486055685, 00:08:44.971 "min_latency_us": 2160.260740740741, 00:08:44.971 "max_latency_us": 15631.54962962963 00:08:44.971 } 00:08:44.971 ], 00:08:44.971 "core_count": 1 00:08:44.971 } 00:08:44.971 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3644590 00:08:44.971 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3644590 ']' 00:08:44.971 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3644590 00:08:44.971 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:44.971 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.971 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3644590 00:08:44.971 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:44.972 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:44.972 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3644590' 00:08:44.972 killing process with pid 3644590 00:08:44.972 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3644590 00:08:44.972 Received shutdown signal, test time was about 10.000000 seconds 00:08:44.972 00:08:44.972 Latency(us) 00:08:44.972 [2024-11-20T08:42:21.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.972 [2024-11-20T08:42:21.886Z] =================================================================================================================== 00:08:44.972 [2024-11-20T08:42:21.886Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:44.972 09:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3644590 00:08:45.230 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:45.487 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:45.745 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c199fa01-b0ff-46d7-a641-4cbd756da50a 00:08:45.745 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:46.003 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:46.003 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:46.003 09:42:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3641459 00:08:46.003 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3641459 00:08:46.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3641459 Killed "${NVMF_APP[@]}" "$@" 00:08:46.003 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:46.003 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:46.003 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:46.003 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:46.003 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:46.003 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3646067 00:08:46.003 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:46.003 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3646067 00:08:46.003 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3646067 ']' 00:08:46.003 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.003 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.003 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.003 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.003 09:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:46.262 [2024-11-20 09:42:22.949262] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:08:46.262 [2024-11-20 09:42:22.949396] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.262 [2024-11-20 09:42:23.023250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.262 [2024-11-20 09:42:23.081407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.262 [2024-11-20 09:42:23.081465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.262 [2024-11-20 09:42:23.081493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.262 [2024-11-20 09:42:23.081505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
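(The step just traced is the core of lvs_grow_dirty: the first nvmf_tgt, pid 3641459, was killed with -9 while the grown lvstore was still open, and a fresh target is started so that blobstore recovery can replay it; the "Performing recovery on blobstore" notices appear once aio_bdev is re-created below. The cluster-count assertions in this trace are plain rpc.py + jq reads, sketched here with the UUID and values observed in this run:)

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    lvs=c199fa01-b0ff-46d7-a641-4cbd756da50a
    free=$($rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters')
    total=$($rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters')
    (( free == 61 ))    # 99 clusters minus the 38 allocated to the lvol
    (( total == 99 ))   # total_data_clusters after bdev_lvol_grow_lvstore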
00:08:46.262 [2024-11-20 09:42:23.081515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.262 [2024-11-20 09:42:23.082134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.520 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.520 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:46.520 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:46.520 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:46.520 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:46.520 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.520 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:46.778 [2024-11-20 09:42:23.470243] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:46.778 [2024-11-20 09:42:23.470399] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:46.778 [2024-11-20 09:42:23.470449] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:46.778 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:46.778 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 617d40df-3338-4ebb-8e65-00076319eb5c 00:08:46.778 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=617d40df-3338-4ebb-8e65-00076319eb5c 00:08:46.778 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.778 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:46.778 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.778 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.779 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:47.037 09:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 617d40df-3338-4ebb-8e65-00076319eb5c -t 2000 00:08:47.295 [ 00:08:47.295 { 00:08:47.295 "name": "617d40df-3338-4ebb-8e65-00076319eb5c", 00:08:47.295 "aliases": [ 00:08:47.295 "lvs/lvol" 00:08:47.295 ], 00:08:47.295 "product_name": "Logical Volume", 00:08:47.295 "block_size": 4096, 00:08:47.295 "num_blocks": 38912, 00:08:47.295 "uuid": "617d40df-3338-4ebb-8e65-00076319eb5c", 00:08:47.295 "assigned_rate_limits": { 00:08:47.295 "rw_ios_per_sec": 0, 00:08:47.295 "rw_mbytes_per_sec": 0, 
00:08:47.295 "r_mbytes_per_sec": 0, 00:08:47.295 "w_mbytes_per_sec": 0 00:08:47.295 }, 00:08:47.295 "claimed": false, 00:08:47.295 "zoned": false, 00:08:47.295 "supported_io_types": { 00:08:47.295 "read": true, 00:08:47.295 "write": true, 00:08:47.295 "unmap": true, 00:08:47.295 "flush": false, 00:08:47.295 "reset": true, 00:08:47.295 "nvme_admin": false, 00:08:47.295 "nvme_io": false, 00:08:47.295 "nvme_io_md": false, 00:08:47.295 "write_zeroes": true, 00:08:47.295 "zcopy": false, 00:08:47.295 "get_zone_info": false, 00:08:47.295 "zone_management": false, 00:08:47.295 "zone_append": false, 00:08:47.295 "compare": false, 00:08:47.295 "compare_and_write": false, 00:08:47.295 "abort": false, 00:08:47.295 "seek_hole": true, 00:08:47.295 "seek_data": true, 00:08:47.295 "copy": false, 00:08:47.295 "nvme_iov_md": false 00:08:47.295 }, 00:08:47.295 "driver_specific": { 00:08:47.295 "lvol": { 00:08:47.295 "lvol_store_uuid": "c199fa01-b0ff-46d7-a641-4cbd756da50a", 00:08:47.295 "base_bdev": "aio_bdev", 00:08:47.295 "thin_provision": false, 00:08:47.295 "num_allocated_clusters": 38, 00:08:47.295 "snapshot": false, 00:08:47.295 "clone": false, 00:08:47.295 "esnap_clone": false 00:08:47.295 } 00:08:47.296 } 00:08:47.296 } 00:08:47.296 ] 00:08:47.296 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:47.296 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c199fa01-b0ff-46d7-a641-4cbd756da50a 00:08:47.296 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:47.553 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:47.553 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c199fa01-b0ff-46d7-a641-4cbd756da50a 00:08:47.553 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:47.811 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:47.811 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:48.069 [2024-11-20 09:42:24.807757] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:48.069 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c199fa01-b0ff-46d7-a641-4cbd756da50a 00:08:48.069 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:48.069 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c199fa01-b0ff-46d7-a641-4cbd756da50a 00:08:48.069 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.069 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.069 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.069 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.069 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.069 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.069 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.069 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:48.069 09:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c199fa01-b0ff-46d7-a641-4cbd756da50a 00:08:48.327 request: 00:08:48.327 { 00:08:48.327 "uuid": "c199fa01-b0ff-46d7-a641-4cbd756da50a", 00:08:48.327 "method": "bdev_lvol_get_lvstores", 00:08:48.327 "req_id": 1 00:08:48.327 } 00:08:48.327 Got JSON-RPC error response 00:08:48.327 response: 00:08:48.327 { 00:08:48.327 "code": -19, 00:08:48.327 "message": "No such device" 00:08:48.327 } 00:08:48.327 09:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:48.327 09:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:48.327 09:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:48.327 09:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:48.327 09:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:48.586 aio_bdev 00:08:48.586 09:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 617d40df-3338-4ebb-8e65-00076319eb5c 00:08:48.586 09:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=617d40df-3338-4ebb-8e65-00076319eb5c 00:08:48.586 09:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.586 09:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:48.586 09:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.586 09:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.586 09:42:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:48.844 09:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 617d40df-3338-4ebb-8e65-00076319eb5c -t 2000 00:08:49.101 [ 00:08:49.101 { 00:08:49.101 "name": "617d40df-3338-4ebb-8e65-00076319eb5c", 00:08:49.101 "aliases": [ 00:08:49.101 "lvs/lvol" 00:08:49.101 ], 00:08:49.101 "product_name": "Logical Volume", 00:08:49.101 "block_size": 4096, 00:08:49.101 "num_blocks": 38912, 00:08:49.101 "uuid": "617d40df-3338-4ebb-8e65-00076319eb5c", 00:08:49.101 "assigned_rate_limits": { 00:08:49.101 "rw_ios_per_sec": 0, 00:08:49.101 "rw_mbytes_per_sec": 0, 00:08:49.101 "r_mbytes_per_sec": 0, 00:08:49.101 "w_mbytes_per_sec": 0 00:08:49.101 }, 00:08:49.101 "claimed": false, 00:08:49.101 "zoned": false, 00:08:49.101 "supported_io_types": { 00:08:49.101 "read": true, 00:08:49.101 "write": true, 00:08:49.101 "unmap": true, 00:08:49.101 "flush": false, 00:08:49.101 "reset": true, 00:08:49.101 "nvme_admin": false, 00:08:49.101 "nvme_io": false, 00:08:49.101 "nvme_io_md": false, 00:08:49.101 "write_zeroes": true, 00:08:49.101 "zcopy": false, 00:08:49.101 "get_zone_info": false, 00:08:49.101 "zone_management": false, 00:08:49.101 "zone_append": false, 00:08:49.101 "compare": false, 00:08:49.101 "compare_and_write": false, 00:08:49.101 "abort": false, 00:08:49.101 "seek_hole": true, 00:08:49.101 "seek_data": true, 00:08:49.101 "copy": false, 00:08:49.101 "nvme_iov_md": false 00:08:49.101 }, 00:08:49.101 "driver_specific": { 00:08:49.101 "lvol": { 00:08:49.101 "lvol_store_uuid": "c199fa01-b0ff-46d7-a641-4cbd756da50a", 00:08:49.101 "base_bdev": "aio_bdev", 00:08:49.101 "thin_provision": false, 00:08:49.101 "num_allocated_clusters": 38, 00:08:49.101 "snapshot": false, 00:08:49.101 "clone": false, 00:08:49.101 "esnap_clone": false 00:08:49.101 } 00:08:49.101 } 00:08:49.101 } 00:08:49.101 ] 00:08:49.101 09:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:49.101 09:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c199fa01-b0ff-46d7-a641-4cbd756da50a 00:08:49.101 09:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:49.359 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:49.359 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c199fa01-b0ff-46d7-a641-4cbd756da50a 00:08:49.359 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:49.617 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:49.617 09:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 617d40df-3338-4ebb-8e65-00076319eb5c 00:08:50.184 09:42:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c199fa01-b0ff-46d7-a641-4cbd756da50a 00:08:50.184 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:50.442 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:50.699 00:08:50.699 real 0m19.555s 00:08:50.699 user 0m49.458s 00:08:50.699 sys 0m4.627s 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:50.699 ************************************ 00:08:50.699 END TEST lvs_grow_dirty 00:08:50.699 ************************************ 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:50.699 nvmf_trace.0 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:50.699 rmmod nvme_tcp 00:08:50.699 rmmod nvme_fabrics 00:08:50.699 rmmod nvme_keyring 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:50.699 
09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3646067 ']' 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3646067 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3646067 ']' 00:08:50.699 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3646067 00:08:50.700 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:50.700 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.700 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3646067 00:08:50.700 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.700 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.700 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3646067' 00:08:50.700 killing process with pid 3646067 00:08:50.700 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3646067 00:08:50.700 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3646067 00:08:50.959 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:50.959 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:50.959 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:50.959 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:50.959 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:50.959 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:50.959 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:50.959 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:50.959 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:50.959 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.959 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.959 09:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:53.498 00:08:53.498 real 0m42.873s 00:08:53.498 user 1m12.731s 00:08:53.498 sys 0m8.592s 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:53.498 ************************************ 00:08:53.498 END TEST nvmf_lvs_grow 00:08:53.498 ************************************ 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.498 ************************************ 00:08:53.498 START TEST nvmf_bdev_io_wait 00:08:53.498 ************************************ 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:53.498 * Looking for test storage... 00:08:53.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.498 09:42:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:53.498 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:53.498 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.498 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.498 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:53.498 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:53.498 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.498 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:53.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.499 --rc genhtml_branch_coverage=1 00:08:53.499 --rc genhtml_function_coverage=1 00:08:53.499 --rc genhtml_legend=1 00:08:53.499 --rc geninfo_all_blocks=1 00:08:53.499 --rc geninfo_unexecuted_blocks=1 00:08:53.499 00:08:53.499 ' 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:53.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.499 --rc genhtml_branch_coverage=1 00:08:53.499 --rc genhtml_function_coverage=1 00:08:53.499 --rc genhtml_legend=1 00:08:53.499 --rc geninfo_all_blocks=1 00:08:53.499 --rc geninfo_unexecuted_blocks=1 00:08:53.499 00:08:53.499 ' 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:53.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.499 --rc genhtml_branch_coverage=1 00:08:53.499 --rc genhtml_function_coverage=1 00:08:53.499 --rc genhtml_legend=1 00:08:53.499 --rc geninfo_all_blocks=1 00:08:53.499 --rc geninfo_unexecuted_blocks=1 00:08:53.499 00:08:53.499 ' 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:53.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.499 --rc genhtml_branch_coverage=1 00:08:53.499 --rc genhtml_function_coverage=1 00:08:53.499 --rc genhtml_legend=1 00:08:53.499 --rc geninfo_all_blocks=1 00:08:53.499 --rc geninfo_unexecuted_blocks=1 00:08:53.499 00:08:53.499 ' 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.499 09:42:30 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.499 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:53.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.500 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.500 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.500 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.500 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:53.500 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:53.500 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:53.500 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:53.500 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.500 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:53.500 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:53.500 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:53.500 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.500 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.500 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.500 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:53.500 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:53.500 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:53.500 09:42:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:55.405 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:55.405 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.405 09:42:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:55.405 Found net devices under 0000:09:00.0: cvl_0_0 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.405 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:55.406 Found net devices under 0000:09:00.1: cvl_0_1 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:55.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:55.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:08:55.406 00:08:55.406 --- 10.0.0.2 ping statistics --- 00:08:55.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.406 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:55.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:55.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:08:55.406 00:08:55.406 --- 10.0.0.1 ping statistics --- 00:08:55.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.406 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:55.406 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:55.664 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:55.664 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:55.664 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:55.664 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.664 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3648718 00:08:55.664 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:55.664 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3648718 00:08:55.664 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3648718 ']' 00:08:55.664 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.664 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.664 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.664 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.664 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.664 [2024-11-20 09:42:32.381429] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:08:55.664 [2024-11-20 09:42:32.381525] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.664 [2024-11-20 09:42:32.458336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:55.664 [2024-11-20 09:42:32.520844] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.664 [2024-11-20 09:42:32.520895] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.665 [2024-11-20 09:42:32.520923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.665 [2024-11-20 09:42:32.520933] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.665 [2024-11-20 09:42:32.520943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.665 [2024-11-20 09:42:32.522536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.665 [2024-11-20 09:42:32.522594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.665 [2024-11-20 09:42:32.522659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:55.665 [2024-11-20 09:42:32.522662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:55.923 [2024-11-20 09:42:32.734223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.923 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.924 Malloc0 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.924 [2024-11-20 09:42:32.788044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3648748 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3648750 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:55.924 { 00:08:55.924 "params": { 
00:08:55.924 "name": "Nvme$subsystem", 00:08:55.924 "trtype": "$TEST_TRANSPORT", 00:08:55.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.924 "adrfam": "ipv4", 00:08:55.924 "trsvcid": "$NVMF_PORT", 00:08:55.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.924 "hdgst": ${hdgst:-false}, 00:08:55.924 "ddgst": ${ddgst:-false} 00:08:55.924 }, 00:08:55.924 "method": "bdev_nvme_attach_controller" 00:08:55.924 } 00:08:55.924 EOF 00:08:55.924 )") 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3648752 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:55.924 { 00:08:55.924 "params": { 00:08:55.924 "name": "Nvme$subsystem", 00:08:55.924 "trtype": "$TEST_TRANSPORT", 00:08:55.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.924 "adrfam": "ipv4", 00:08:55.924 "trsvcid": "$NVMF_PORT", 00:08:55.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.924 "hdgst": ${hdgst:-false}, 00:08:55.924 "ddgst": ${ddgst:-false} 00:08:55.924 }, 00:08:55.924 "method": "bdev_nvme_attach_controller" 00:08:55.924 } 00:08:55.924 EOF 00:08:55.924 )") 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3648755 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:55.924 { 00:08:55.924 "params": { 00:08:55.924 "name": "Nvme$subsystem", 00:08:55.924 "trtype": "$TEST_TRANSPORT", 00:08:55.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.924 "adrfam": "ipv4", 00:08:55.924 "trsvcid": "$NVMF_PORT", 00:08:55.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.924 "hdgst": ${hdgst:-false}, 
00:08:55.924 "ddgst": ${ddgst:-false} 00:08:55.924 }, 00:08:55.924 "method": "bdev_nvme_attach_controller" 00:08:55.924 } 00:08:55.924 EOF 00:08:55.924 )") 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:55.924 { 00:08:55.924 "params": { 00:08:55.924 "name": "Nvme$subsystem", 00:08:55.924 "trtype": "$TEST_TRANSPORT", 00:08:55.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.924 "adrfam": "ipv4", 00:08:55.924 "trsvcid": "$NVMF_PORT", 00:08:55.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.924 "hdgst": ${hdgst:-false}, 00:08:55.924 "ddgst": ${ddgst:-false} 00:08:55.924 }, 00:08:55.924 "method": "bdev_nvme_attach_controller" 00:08:55.924 } 00:08:55.924 EOF 00:08:55.924 )") 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3648748 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:55.924 "params": { 00:08:55.924 "name": "Nvme1", 00:08:55.924 "trtype": "tcp", 00:08:55.924 "traddr": "10.0.0.2", 00:08:55.924 "adrfam": "ipv4", 00:08:55.924 "trsvcid": "4420", 00:08:55.924 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.924 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.924 "hdgst": false, 00:08:55.924 "ddgst": false 00:08:55.924 }, 00:08:55.924 "method": "bdev_nvme_attach_controller" 00:08:55.924 }' 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:55.924 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:55.924 "params": { 00:08:55.924 "name": "Nvme1", 00:08:55.924 "trtype": "tcp", 00:08:55.924 "traddr": "10.0.0.2", 00:08:55.924 "adrfam": "ipv4", 00:08:55.924 "trsvcid": "4420", 00:08:55.924 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.924 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.924 "hdgst": false, 00:08:55.924 "ddgst": false 00:08:55.924 }, 00:08:55.924 "method": "bdev_nvme_attach_controller" 00:08:55.925 }' 00:08:55.925 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:55.925 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:55.925 "params": { 00:08:55.925 "name": "Nvme1", 00:08:55.925 "trtype": "tcp", 00:08:55.925 "traddr": "10.0.0.2", 00:08:55.925 "adrfam": "ipv4", 00:08:55.925 "trsvcid": "4420", 00:08:55.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.925 "hdgst": false, 00:08:55.925 "ddgst": false 00:08:55.925 }, 00:08:55.925 "method": "bdev_nvme_attach_controller" 00:08:55.925 }' 00:08:55.925 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:55.925 09:42:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:55.925 "params": { 00:08:55.925 "name": "Nvme1", 00:08:55.925 "trtype": "tcp", 00:08:55.925 "traddr": "10.0.0.2", 00:08:55.925 "adrfam": "ipv4", 00:08:55.925 "trsvcid": "4420", 00:08:55.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.925 "hdgst": false, 00:08:55.925 "ddgst": false 00:08:55.925 }, 00:08:55.925 "method": "bdev_nvme_attach_controller" 00:08:55.925 }' 00:08:56.183 [2024-11-20 09:42:32.840847] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:08:56.183 [2024-11-20 09:42:32.840847] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:08:56.183 [2024-11-20 09:42:32.840846] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:08:56.183 [2024-11-20 09:42:32.840847] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
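The four JSON fragments printed above are what gen_nvmf_target_json hands to each bdevperf instance over /dev/fd/63. A single run can be reproduced by hand roughly like this; it is a sketch only, since the excerpt shows just the inner bdev_nvme_attach_controller object and the surrounding {"subsystems": ...} wrapper is assumed to be the usual SPDK JSON-config layout. The scratch-file name is arbitrary, paths are relative to the SPDK tree, and the flags match the write-workload instance above.

    cat > /tmp/bdevperf_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/bdevperf_nvme.json \
        -q 128 -o 4096 -w write -t 1 -s 256      # queue depth 128, 4 KiB writes, 1 second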
00:08:56.183 [2024-11-20 09:42:32.840934] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 
00:08:56.183 [2024-11-20 09:42:32.840933] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 
00:08:56.183 [2024-11-20 09:42:32.840933] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 
00:08:56.183 [2024-11-20 09:42:32.840945] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 
00:08:56.183 [2024-11-20 09:42:33.020864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.183 [2024-11-20 09:42:33.077811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:56.441 [2024-11-20 09:42:33.126880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.441 [2024-11-20 09:42:33.183114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:56.441 [2024-11-20 09:42:33.231773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.441 [2024-11-20 09:42:33.289343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:56.441 [2024-11-20 09:42:33.342282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.699 [2024-11-20 09:42:33.397188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:56.699 Running I/O for 1 seconds... 00:08:56.699 Running I/O for 1 seconds... 00:08:56.699 Running I/O for 1 seconds... 00:08:56.956 Running I/O for 1 seconds...
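Before these four I/O jobs started, the target had been configured through the rpc_cmd calls traced earlier. Since nvmf_tgt was launched with --wait-for-rpc, the bdev options are set before framework_start_init. rpc_cmd in the harness ultimately drives scripts/rpc.py against the default /var/tmp/spdk.sock socket, so the equivalent manual sequence is roughly the following sketch (default socket assumed):

    scripts/rpc.py bdev_set_options -p 5 -c 1      # tiny bdev_io pool/cache, presumably to force the io_wait path this test exercises
    scripts/rpc.py framework_start_init            # finish startup after --wait-for-rpc
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420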
00:08:57.890 183696.00 IOPS, 717.56 MiB/s 00:08:57.890 Latency(us) 00:08:57.890 [2024-11-20T08:42:34.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.890 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:57.890 Nvme1n1 : 1.00 183345.19 716.19 0.00 0.00 694.22 292.79 1893.26 00:08:57.890 [2024-11-20T08:42:34.804Z] =================================================================================================================== 00:08:57.890 [2024-11-20T08:42:34.804Z] Total : 183345.19 716.19 0.00 0.00 694.22 292.79 1893.26 00:08:57.890 8720.00 IOPS, 34.06 MiB/s [2024-11-20T08:42:34.804Z] 8689.00 IOPS, 33.94 MiB/s 00:08:57.890 Latency(us) 00:08:57.890 [2024-11-20T08:42:34.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.890 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:57.890 Nvme1n1 : 1.01 8787.00 34.32 0.00 0.00 14505.11 5509.88 21554.06 00:08:57.890 [2024-11-20T08:42:34.804Z] =================================================================================================================== 00:08:57.890 [2024-11-20T08:42:34.804Z] Total : 8787.00 34.32 0.00 0.00 14505.11 5509.88 21554.06 00:08:57.890 00:08:57.890 Latency(us) 00:08:57.890 [2024-11-20T08:42:34.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.890 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:57.890 Nvme1n1 : 1.01 8737.04 34.13 0.00 0.00 14574.87 7815.77 23010.42 00:08:57.890 [2024-11-20T08:42:34.804Z] =================================================================================================================== 00:08:57.890 [2024-11-20T08:42:34.804Z] Total : 8737.04 34.13 0.00 0.00 14574.87 7815.77 23010.42 00:08:57.890 9204.00 IOPS, 35.95 MiB/s 00:08:57.890 Latency(us) 00:08:57.890 [2024-11-20T08:42:34.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.890 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:57.890 Nvme1n1 : 1.01 9276.88 36.24 0.00 0.00 13747.43 5145.79 25437.68 00:08:57.890 [2024-11-20T08:42:34.804Z] =================================================================================================================== 00:08:57.890 [2024-11-20T08:42:34.804Z] Total : 9276.88 36.24 0.00 0.00 13747.43 5145.79 25437.68 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3648750 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3648752 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3648755 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:58.148 rmmod nvme_tcp 00:08:58.148 rmmod nvme_fabrics 00:08:58.148 rmmod nvme_keyring 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3648718 ']' 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3648718 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3648718 ']' 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3648718 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3648718 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3648718' 00:08:58.148 killing process with pid 3648718 00:08:58.148 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3648718 00:08:58.149 09:42:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3648718 00:08:58.408 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:58.408 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:58.408 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:58.408 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:58.408 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:58.408 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:58.408 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:58.408 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:58.409 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:58.409 09:42:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.409 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.409 09:42:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.313 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:00.313 00:09:00.313 real 0m7.349s 00:09:00.313 user 0m16.222s 00:09:00.313 sys 0m3.790s 00:09:00.313 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.313 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.313 ************************************ 00:09:00.313 END TEST nvmf_bdev_io_wait 00:09:00.313 ************************************ 00:09:00.573 09:42:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:00.573 09:42:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:00.573 09:42:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.573 09:42:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:00.573 ************************************ 00:09:00.574 START TEST nvmf_queue_depth 00:09:00.574 ************************************ 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:00.574 * Looking for test storage... 
00:09:00.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:00.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.574 --rc genhtml_branch_coverage=1 00:09:00.574 --rc genhtml_function_coverage=1 00:09:00.574 --rc genhtml_legend=1 00:09:00.574 --rc geninfo_all_blocks=1 00:09:00.574 --rc geninfo_unexecuted_blocks=1 00:09:00.574 00:09:00.574 ' 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:00.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.574 --rc genhtml_branch_coverage=1 00:09:00.574 --rc genhtml_function_coverage=1 00:09:00.574 --rc genhtml_legend=1 00:09:00.574 --rc geninfo_all_blocks=1 00:09:00.574 --rc geninfo_unexecuted_blocks=1 00:09:00.574 00:09:00.574 ' 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:00.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.574 --rc genhtml_branch_coverage=1 00:09:00.574 --rc genhtml_function_coverage=1 00:09:00.574 --rc genhtml_legend=1 00:09:00.574 --rc geninfo_all_blocks=1 00:09:00.574 --rc geninfo_unexecuted_blocks=1 00:09:00.574 00:09:00.574 ' 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:00.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.574 --rc genhtml_branch_coverage=1 00:09:00.574 --rc genhtml_function_coverage=1 00:09:00.574 --rc genhtml_legend=1 00:09:00.574 --rc geninfo_all_blocks=1 00:09:00.574 --rc geninfo_unexecuted_blocks=1 00:09:00.574 00:09:00.574 ' 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:00.574 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:00.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:00.575 09:42:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:03.156 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:03.156 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:03.156 Found net devices under 0000:09:00.0: cvl_0_0 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:03.156 Found net devices under 0000:09:00.1: cvl_0_1 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:03.156 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:03.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:09:03.157 00:09:03.157 --- 10.0.0.2 ping statistics --- 00:09:03.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.157 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:03.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:03.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:09:03.157 00:09:03.157 --- 10.0.0.1 ping statistics --- 00:09:03.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.157 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3650989 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3650989 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3650989 ']' 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.157 09:42:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.157 [2024-11-20 09:42:39.851852] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
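A note for anyone reproducing this by hand: the nvmf_tcp_init trace above turns the two E810 ports into a self-contained test bed by moving the target-side port (cvl_0_0) into a private network namespace and leaving the initiator-side port (cvl_0_1) in the default namespace. A condensed sketch of the equivalent setup, using the interface names and addresses detected in this run (they come from the PCI scan and nvmf/common.sh, so they will differ on other machines):

   # move the target port into its own namespace
   ip netns add cvl_0_0_ns_spdk
   ip link set cvl_0_0 netns cvl_0_0_ns_spdk

   # initiator gets 10.0.0.1, target gets 10.0.0.2
   ip addr add 10.0.0.1/24 dev cvl_0_1
   ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
   ip link set cvl_0_1 up
   ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
   ip netns exec cvl_0_0_ns_spdk ip link set lo up

   # open TCP/4420 on the initiator-side interface; the SPDK_NVMF comment tag is
   # what lets nvmftestfini strip this rule again during teardown
   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
       -m comment --comment SPDK_NVMF

   # verify reachability in both directions before starting the target
   ping -c 1 10.0.0.2
   ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1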
00:09:03.157 [2024-11-20 09:42:39.851931] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.157 [2024-11-20 09:42:39.929950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.157 [2024-11-20 09:42:39.988692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.157 [2024-11-20 09:42:39.988749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.157 [2024-11-20 09:42:39.988778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.157 [2024-11-20 09:42:39.988789] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.157 [2024-11-20 09:42:39.988799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.157 [2024-11-20 09:42:39.989435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.414 [2024-11-20 09:42:40.137820] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.414 Malloc0 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.414 09:42:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.414 [2024-11-20 09:42:40.185480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3651079 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3651079 /var/tmp/bdevperf.sock 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3651079 ']' 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:03.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.414 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.414 [2024-11-20 09:42:40.237146] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
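For reference, the queue_depth setup traced above reduces to a short RPC sequence against the nvmf_tgt running inside the namespace, followed by driving bdevperf from the initiator side. A hedged sketch, run from the SPDK repo root (the test scripts wrap these calls in rpc_cmd/waitforlisten helpers, so the exact invocations differ slightly):

   # target side -- nvmf_tgt was launched as:
   #   ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
   scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
   scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
   scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
   scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
   scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

   # initiator side -- bdevperf idles (-z) until a controller is attached and the
   # run is kicked off over its own RPC socket
   build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
   scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
       -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
   examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

In the results further down, IOPS and throughput are tied together by the 4 KiB I/O size: 8668.98 IOPS x 4096 B / 2^20 ≈ 33.86 MiB/s, which is exactly the MiB/s column in the summary.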
00:09:03.414 [2024-11-20 09:42:40.237245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3651079 ] 00:09:03.414 [2024-11-20 09:42:40.304485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.671 [2024-11-20 09:42:40.362532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.671 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.671 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:03.671 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:03.671 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.671 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.928 NVMe0n1 00:09:03.928 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.928 09:42:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:04.186 Running I/O for 10 seconds... 00:09:06.056 8192.00 IOPS, 32.00 MiB/s [2024-11-20T08:42:43.904Z] 8473.50 IOPS, 33.10 MiB/s [2024-11-20T08:42:45.278Z] 8530.33 IOPS, 33.32 MiB/s [2024-11-20T08:42:46.211Z] 8572.75 IOPS, 33.49 MiB/s [2024-11-20T08:42:47.145Z] 8595.80 IOPS, 33.58 MiB/s [2024-11-20T08:42:48.079Z] 8651.00 IOPS, 33.79 MiB/s [2024-11-20T08:42:49.013Z] 8623.71 IOPS, 33.69 MiB/s [2024-11-20T08:42:49.947Z] 8622.25 IOPS, 33.68 MiB/s [2024-11-20T08:42:51.320Z] 8641.56 IOPS, 33.76 MiB/s [2024-11-20T08:42:51.320Z] 8634.00 IOPS, 33.73 MiB/s 00:09:14.406 Latency(us) 00:09:14.406 [2024-11-20T08:42:51.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.407 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:14.407 Verification LBA range: start 0x0 length 0x4000 00:09:14.407 NVMe0n1 : 10.08 8668.98 33.86 0.00 0.00 117545.68 18058.81 69905.07 00:09:14.407 [2024-11-20T08:42:51.321Z] =================================================================================================================== 00:09:14.407 [2024-11-20T08:42:51.321Z] Total : 8668.98 33.86 0.00 0.00 117545.68 18058.81 69905.07 00:09:14.407 { 00:09:14.407 "results": [ 00:09:14.407 { 00:09:14.407 "job": "NVMe0n1", 00:09:14.407 "core_mask": "0x1", 00:09:14.407 "workload": "verify", 00:09:14.407 "status": "finished", 00:09:14.407 "verify_range": { 00:09:14.407 "start": 0, 00:09:14.407 "length": 16384 00:09:14.407 }, 00:09:14.407 "queue_depth": 1024, 00:09:14.407 "io_size": 4096, 00:09:14.407 "runtime": 10.077768, 00:09:14.407 "iops": 8668.983052596566, 00:09:14.407 "mibps": 33.86321504920534, 00:09:14.407 "io_failed": 0, 00:09:14.407 "io_timeout": 0, 00:09:14.407 "avg_latency_us": 117545.68297327317, 00:09:14.407 "min_latency_us": 18058.80888888889, 00:09:14.407 "max_latency_us": 69905.06666666667 00:09:14.407 } 00:09:14.407 ], 00:09:14.407 "core_count": 1 00:09:14.407 } 00:09:14.407 09:42:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3651079 00:09:14.407 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3651079 ']' 00:09:14.407 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3651079 00:09:14.407 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:14.407 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.407 09:42:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3651079 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3651079' 00:09:14.407 killing process with pid 3651079 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3651079 00:09:14.407 Received shutdown signal, test time was about 10.000000 seconds 00:09:14.407 00:09:14.407 Latency(us) 00:09:14.407 [2024-11-20T08:42:51.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.407 [2024-11-20T08:42:51.321Z] =================================================================================================================== 00:09:14.407 [2024-11-20T08:42:51.321Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3651079 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:14.407 rmmod nvme_tcp 00:09:14.407 rmmod nvme_fabrics 00:09:14.407 rmmod nvme_keyring 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3650989 ']' 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3650989 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3650989 ']' 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 3650989 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:14.407 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.665 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3650989 00:09:14.665 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:14.665 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:14.665 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3650989' 00:09:14.665 killing process with pid 3650989 00:09:14.665 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3650989 00:09:14.665 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3650989 00:09:14.925 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:14.925 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:14.925 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:14.925 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:14.925 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:14.925 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:14.925 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:14.925 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:14.925 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:14.925 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.925 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.925 09:42:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.835 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:16.835 00:09:16.835 real 0m16.385s 00:09:16.835 user 0m22.999s 00:09:16.835 sys 0m3.139s 00:09:16.835 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.835 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:16.835 ************************************ 00:09:16.835 END TEST nvmf_queue_depth 00:09:16.835 ************************************ 00:09:16.835 09:42:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:16.835 09:42:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.835 09:42:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.835 09:42:53 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:16.835 ************************************ 00:09:16.835 START TEST nvmf_target_multipath 00:09:16.835 ************************************ 00:09:16.835 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:17.094 * Looking for test storage... 00:09:17.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.094 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:17.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.095 --rc genhtml_branch_coverage=1 00:09:17.095 --rc genhtml_function_coverage=1 00:09:17.095 --rc genhtml_legend=1 00:09:17.095 --rc geninfo_all_blocks=1 00:09:17.095 --rc geninfo_unexecuted_blocks=1 00:09:17.095 00:09:17.095 ' 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:17.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.095 --rc genhtml_branch_coverage=1 00:09:17.095 --rc genhtml_function_coverage=1 00:09:17.095 --rc genhtml_legend=1 00:09:17.095 --rc geninfo_all_blocks=1 00:09:17.095 --rc geninfo_unexecuted_blocks=1 00:09:17.095 00:09:17.095 ' 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:17.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.095 --rc genhtml_branch_coverage=1 00:09:17.095 --rc genhtml_function_coverage=1 00:09:17.095 --rc genhtml_legend=1 00:09:17.095 --rc geninfo_all_blocks=1 00:09:17.095 --rc geninfo_unexecuted_blocks=1 00:09:17.095 00:09:17.095 ' 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:17.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.095 --rc genhtml_branch_coverage=1 00:09:17.095 --rc genhtml_function_coverage=1 00:09:17.095 --rc genhtml_legend=1 00:09:17.095 --rc geninfo_all_blocks=1 00:09:17.095 --rc geninfo_unexecuted_blocks=1 00:09:17.095 00:09:17.095 ' 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:17.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:17.095 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.096 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:17.096 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:17.096 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:17.096 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.096 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.096 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.096 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:17.096 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:17.096 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:17.096 09:42:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:19.631 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:19.631 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:19.631 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:19.631 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:19.631 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:19.631 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:19.631 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:19.631 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:19.631 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:19.631 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:19.631 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:19.631 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:19.631 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:19.631 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:19.632 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:19.632 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:19.632 Found net devices under 0000:09:00.0: cvl_0_0 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.632 09:42:56 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:19.632 Found net devices under 0000:09:00.1: cvl_0_1 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:19.632 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:19.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:19.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:09:19.633 00:09:19.633 --- 10.0.0.2 ping statistics --- 00:09:19.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.633 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:19.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:19.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:09:19.633 00:09:19.633 --- 10.0.0.1 ping statistics --- 00:09:19.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.633 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:19.633 only one NIC for nvmf test 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
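The multipath test above bails out almost immediately: only the two ports of a single E810 are wired up on this rig and nvmf/common.sh left the second target IP empty, so the '[' -z ']' check at multipath.sh@45 (presumably on $NVMF_SECOND_TARGET_IP) triggers the "only one NIC for nvmf test" message and the script goes straight into nvmftestfini, exiting 0 a few lines below. The teardown it runs (the same one already seen at the end of the queue_depth test) is roughly the mirror image of the setup; a hedged sketch, noting that _remove_spdk_ns is traced with xtrace suppressed, so its body is not visible in this log:

   # unload the initiator-side kernel modules pulled in by "modprobe nvme-tcp"
   modprobe -v -r nvme-tcp
   modprobe -v -r nvme-fabrics

   # drop only the SPDK_NVMF-tagged iptables rules added during setup
   iptables-save | grep -v SPDK_NVMF | iptables-restore

   # presumably deletes the cvl_0_0_ns_spdk namespace (helper body not shown here)
   _remove_spdk_ns

   # flush the initiator-side address so the next test starts from a clean slate
   ip -4 addr flush cvl_0_1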
00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:19.633 rmmod nvme_tcp 00:09:19.633 rmmod nvme_fabrics 00:09:19.633 rmmod nvme_keyring 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.633 09:42:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:21.536 00:09:21.536 real 0m4.608s 00:09:21.536 user 0m0.941s 00:09:21.536 sys 0m1.677s 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:21.536 ************************************ 00:09:21.536 END TEST nvmf_target_multipath 00:09:21.536 ************************************ 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.536 09:42:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:21.536 ************************************ 00:09:21.536 START TEST nvmf_zcopy 00:09:21.536 ************************************ 00:09:21.537 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:21.537 * Looking for test storage... 
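That nvmftestfini teardown condenses to the sketch below; only effects visible in the log are included, and _remove_spdk_ns is a helper whose body is hidden behind xtrace_disable here, so treating it as "delete the cvl_0_0_ns_spdk namespace created during setup" is an assumption:

    sync
    modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring being removed
    modprobe -v -r nvme-fabrics
    # drop only the SPDK-tagged firewall rules and leave everything else in place
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    _remove_spdk_ns                # assumed effect: removes the cvl_0_0_ns_spdk namespace
    ip -4 addr flush cvl_0_1

With nvmf_target_multipath finished after roughly 4.6 seconds of wall time, autotest moves on to run_test nvmf_zcopy; zcopy.sh re-sources test/nvmf/common.sh, so the PCI scan and namespace setup seen earlier repeat below for the new test, preceded by an lcov version probe from the shared test harness.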
00:09:21.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.537 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:21.537 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:21.537 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:21.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.795 --rc genhtml_branch_coverage=1 00:09:21.795 --rc genhtml_function_coverage=1 00:09:21.795 --rc genhtml_legend=1 00:09:21.795 --rc geninfo_all_blocks=1 00:09:21.795 --rc geninfo_unexecuted_blocks=1 00:09:21.795 00:09:21.795 ' 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:21.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.795 --rc genhtml_branch_coverage=1 00:09:21.795 --rc genhtml_function_coverage=1 00:09:21.795 --rc genhtml_legend=1 00:09:21.795 --rc geninfo_all_blocks=1 00:09:21.795 --rc geninfo_unexecuted_blocks=1 00:09:21.795 00:09:21.795 ' 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:21.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.795 --rc genhtml_branch_coverage=1 00:09:21.795 --rc genhtml_function_coverage=1 00:09:21.795 --rc genhtml_legend=1 00:09:21.795 --rc geninfo_all_blocks=1 00:09:21.795 --rc geninfo_unexecuted_blocks=1 00:09:21.795 00:09:21.795 ' 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:21.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.795 --rc genhtml_branch_coverage=1 00:09:21.795 --rc genhtml_function_coverage=1 00:09:21.795 --rc genhtml_legend=1 00:09:21.795 --rc geninfo_all_blocks=1 00:09:21.795 --rc geninfo_unexecuted_blocks=1 00:09:21.795 00:09:21.795 ' 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.795 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:21.796 09:42:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:24.328 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:24.328 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.328 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:24.329 Found net devices under 0000:09:00.0: cvl_0_0 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:24.329 Found net devices under 0000:09:00.1: cvl_0_1 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:24.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:09:24.329 00:09:24.329 --- 10.0.0.2 ping statistics --- 00:09:24.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.329 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:24.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:09:24.329 00:09:24.329 --- 10.0.0.1 ping statistics --- 00:09:24.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.329 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3656344 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3656344 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3656344 ']' 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.329 09:43:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.329 [2024-11-20 09:43:00.941129] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
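nvmfappstart -m 0x2 amounts to launching the target application inside the test namespace and then waiting for its RPC socket; a rough equivalent of what the trace shows (waitforlisten is a harness helper, and the rpc_get_methods poll below is a sketch of its effect, not its actual body, assuming the standard scripts/rpc.py client):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                              # 3656344 in this run
    # poll until the app answers JSON-RPC requests on /var/tmp/spdk.sock
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1        # give up if the target died during startup
        sleep 0.5
    done

Here -m 0x2 puts the target's reactor on core 1 (matching the 'Reactor started on core 1' notice below), -i 0 sets the shared-memory ID and -e 0xFFFF selects the tracepoint group mask reported in the startup notices.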
00:09:24.329 [2024-11-20 09:43:00.941220] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.329 [2024-11-20 09:43:01.017023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.329 [2024-11-20 09:43:01.075709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.329 [2024-11-20 09:43:01.075765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.330 [2024-11-20 09:43:01.075794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.330 [2024-11-20 09:43:01.075806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.330 [2024-11-20 09:43:01.075816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.330 [2024-11-20 09:43:01.076470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.330 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.330 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:24.330 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:24.330 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:24.330 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.330 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.330 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:24.330 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:24.330 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.330 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.330 [2024-11-20 09:43:01.228452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.330 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.330 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:24.330 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.330 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.588 [2024-11-20 09:43:01.244698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.588 malloc0 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:24.588 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:24.588 { 00:09:24.588 "params": { 00:09:24.588 "name": "Nvme$subsystem", 00:09:24.588 "trtype": "$TEST_TRANSPORT", 00:09:24.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.588 "adrfam": "ipv4", 00:09:24.588 "trsvcid": "$NVMF_PORT", 00:09:24.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.588 "hdgst": ${hdgst:-false}, 00:09:24.588 "ddgst": ${ddgst:-false} 00:09:24.588 }, 00:09:24.589 "method": "bdev_nvme_attach_controller" 00:09:24.589 } 00:09:24.589 EOF 00:09:24.589 )") 00:09:24.589 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:24.589 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
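The zcopy.sh setup just traced (script lines 22 through 33) is a plain JSON-RPC configuration of the freshly started target; rpc_cmd is the harness wrapper around the app's UNIX socket, so the same sequence issued with the standard scripts/rpc.py client would look roughly as follows, with every argument taken verbatim from the trace:

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    # TCP transport with the zero-copy path under test enabled (-o and -c 0 exactly as recorded above)
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem cnode1: any host allowed (-a), serial SPDK00000000000001, at most 10 namespaces (-m 10)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MB RAM-backed bdev with 4096-byte blocks, exported as namespace 1 of cnode1
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf is then pointed at that listener through a config generated on the fly by gen_nvmf_target_json and fed over /dev/fd/62, and runs a 10-second verify workload at queue depth 128 with 8 KiB I/O; the resolved attach-controller config and the per-second IOPS samples follow below.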
00:09:24.589 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:24.589 09:43:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:24.589 "params": { 00:09:24.589 "name": "Nvme1", 00:09:24.589 "trtype": "tcp", 00:09:24.589 "traddr": "10.0.0.2", 00:09:24.589 "adrfam": "ipv4", 00:09:24.589 "trsvcid": "4420", 00:09:24.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.589 "hdgst": false, 00:09:24.589 "ddgst": false 00:09:24.589 }, 00:09:24.589 "method": "bdev_nvme_attach_controller" 00:09:24.589 }' 00:09:24.589 [2024-11-20 09:43:01.330734] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:09:24.589 [2024-11-20 09:43:01.330804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3656376 ] 00:09:24.589 [2024-11-20 09:43:01.396634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.589 [2024-11-20 09:43:01.456916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.155 Running I/O for 10 seconds... 00:09:27.022 5485.00 IOPS, 42.85 MiB/s [2024-11-20T08:43:04.870Z] 5546.00 IOPS, 43.33 MiB/s [2024-11-20T08:43:05.803Z] 5547.00 IOPS, 43.34 MiB/s [2024-11-20T08:43:07.177Z] 5561.50 IOPS, 43.45 MiB/s [2024-11-20T08:43:08.110Z] 5571.00 IOPS, 43.52 MiB/s [2024-11-20T08:43:09.044Z] 5577.50 IOPS, 43.57 MiB/s [2024-11-20T08:43:09.978Z] 5581.14 IOPS, 43.60 MiB/s [2024-11-20T08:43:10.968Z] 5585.75 IOPS, 43.64 MiB/s [2024-11-20T08:43:11.902Z] 5588.22 IOPS, 43.66 MiB/s [2024-11-20T08:43:11.902Z] 5591.00 IOPS, 43.68 MiB/s 00:09:34.988 Latency(us) 00:09:34.988 [2024-11-20T08:43:11.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.988 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:34.988 Verification LBA range: start 0x0 length 0x1000 00:09:34.988 Nvme1n1 : 10.02 5595.20 43.71 0.00 0.00 22816.41 2985.53 32039.82 00:09:34.988 [2024-11-20T08:43:11.902Z] =================================================================================================================== 00:09:34.988 [2024-11-20T08:43:11.902Z] Total : 5595.20 43.71 0.00 0.00 22816.41 2985.53 32039.82 00:09:35.247 09:43:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3657572 00:09:35.247 09:43:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:35.247 09:43:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.247 09:43:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:35.247 09:43:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:35.247 09:43:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:35.247 09:43:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:35.247 09:43:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:35.247 09:43:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:35.247 { 00:09:35.247 "params": { 00:09:35.247 "name": 
"Nvme$subsystem", 00:09:35.247 "trtype": "$TEST_TRANSPORT", 00:09:35.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:35.247 "adrfam": "ipv4", 00:09:35.247 "trsvcid": "$NVMF_PORT", 00:09:35.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:35.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:35.247 "hdgst": ${hdgst:-false}, 00:09:35.247 "ddgst": ${ddgst:-false} 00:09:35.247 }, 00:09:35.247 "method": "bdev_nvme_attach_controller" 00:09:35.247 } 00:09:35.247 EOF 00:09:35.247 )") 00:09:35.247 [2024-11-20 09:43:12.036896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.247 [2024-11-20 09:43:12.036934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.247 09:43:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:35.247 09:43:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:35.247 09:43:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:35.247 09:43:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:35.247 "params": { 00:09:35.247 "name": "Nvme1", 00:09:35.247 "trtype": "tcp", 00:09:35.247 "traddr": "10.0.0.2", 00:09:35.247 "adrfam": "ipv4", 00:09:35.247 "trsvcid": "4420", 00:09:35.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:35.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:35.247 "hdgst": false, 00:09:35.247 "ddgst": false 00:09:35.247 }, 00:09:35.247 "method": "bdev_nvme_attach_controller" 00:09:35.247 }' 00:09:35.247 [2024-11-20 09:43:12.044865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.247 [2024-11-20 09:43:12.044902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.247 [2024-11-20 09:43:12.052887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.247 [2024-11-20 09:43:12.052908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.247 [2024-11-20 09:43:12.060906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.247 [2024-11-20 09:43:12.060926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.247 [2024-11-20 09:43:12.068928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.247 [2024-11-20 09:43:12.068948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.247 [2024-11-20 09:43:12.076978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.247 [2024-11-20 09:43:12.077003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.247 [2024-11-20 09:43:12.079552] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:09:35.247 [2024-11-20 09:43:12.079628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657572 ] 00:09:35.247 [2024-11-20 09:43:12.084970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.247 [2024-11-20 09:43:12.084990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.247 [2024-11-20 09:43:12.092990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.247 [2024-11-20 09:43:12.093009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.247 [2024-11-20 09:43:12.101011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.247 [2024-11-20 09:43:12.101042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.247 [2024-11-20 09:43:12.109034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.247 [2024-11-20 09:43:12.109053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.247 [2024-11-20 09:43:12.117054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.247 [2024-11-20 09:43:12.117073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.247 [2024-11-20 09:43:12.125076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.247 [2024-11-20 09:43:12.125095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.247 [2024-11-20 09:43:12.133097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.247 [2024-11-20 09:43:12.133117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.247 [2024-11-20 09:43:12.141117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.247 [2024-11-20 09:43:12.141137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.247 [2024-11-20 09:43:12.148104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.247 [2024-11-20 09:43:12.149139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.247 [2024-11-20 09:43:12.149158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.247 [2024-11-20 09:43:12.157189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.247 [2024-11-20 09:43:12.157219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.505 [2024-11-20 09:43:12.165211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.505 [2024-11-20 09:43:12.165242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.505 [2024-11-20 09:43:12.173223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.505 [2024-11-20 09:43:12.173245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.505 [2024-11-20 09:43:12.181230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.505 [2024-11-20 09:43:12.181250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:35.505 [2024-11-20 09:43:12.189252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.506 [2024-11-20 09:43:12.189271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.506 [2024-11-20 09:43:12.197275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.506 [2024-11-20 09:43:12.197326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.506 [2024-11-20 09:43:12.205321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.506 [2024-11-20 09:43:12.205343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.506 [2024-11-20 09:43:12.211018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.506 [2024-11-20 09:43:12.213346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.506 [2024-11-20 09:43:12.213370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.506 [2024-11-20 09:43:12.221382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.506 [2024-11-20 09:43:12.221403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.506 [2024-11-20 09:43:12.229401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.506 [2024-11-20 09:43:12.229429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.506 [2024-11-20 09:43:12.237442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.506 [2024-11-20 09:43:12.237474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.506 [2024-11-20 09:43:12.245478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.506 [2024-11-20 09:43:12.245510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.506 [2024-11-20 09:43:12.253483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.506 [2024-11-20 09:43:12.253516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.506 [2024-11-20 09:43:12.261505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.506 [2024-11-20 09:43:12.261536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.506 [2024-11-20 09:43:12.269525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.506 [2024-11-20 09:43:12.269558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.506 [2024-11-20 09:43:12.277566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.506 [2024-11-20 09:43:12.277598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.506 [2024-11-20 09:43:12.285547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.506 [2024-11-20 09:43:12.285568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.506 [2024-11-20 09:43:12.293603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.506 [2024-11-20 09:43:12.293633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.506 [2024-11-20 
09:43:12.301626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:35.506 [2024-11-20 09:43:12.301672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:35.765 Running I/O for 5 seconds...
00:09:36.800 11859.00 IOPS, 92.65 MiB/s [2024-11-20T08:43:13.714Z]
00:09:37.835 11898.50 IOPS, 92.96 MiB/s [2024-11-20T08:43:14.749Z]
00:09:38.613 [2024-11-20 09:43:15.519933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:38.613 [2024-11-20 09:43:15.519961]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.530450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.530479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.541246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.541274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.551848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.551884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.562760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.562803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.573676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.573721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 11938.67 IOPS, 93.27 MiB/s [2024-11-20T08:43:15.785Z] [2024-11-20 09:43:15.586946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.586975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.597617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.597645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.608156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.608184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.620300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.620336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.630441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.630469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.641205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.641233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.652134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.652162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.662573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.662600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.673198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.673226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 
09:43:15.683925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.683952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.696614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.696642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.706985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.707013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.717777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.717804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.730167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.730195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.741935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.741963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.751748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.751777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.762094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.762135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.772432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.772460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.871 [2024-11-20 09:43:15.782817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.871 [2024-11-20 09:43:15.782845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:15.793045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:15.793074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:15.803636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:15.803664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:15.814322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:15.814349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:15.826912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:15.826940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:15.836953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:15.836980] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:15.847426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:15.847454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:15.858531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:15.858558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:15.869333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:15.869369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:15.882410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:15.882438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:15.894387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:15.894416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:15.903282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:15.903319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:15.915103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:15.915145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:15.925863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:15.925890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:15.936784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:15.936812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:15.948979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:15.949007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:15.958904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:15.958931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:15.969910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:15.969945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:15.982524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:15.982552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:15.992593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:15.992621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:16.002966] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:16.002994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:16.013660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:16.013687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:16.024321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:16.024349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.130 [2024-11-20 09:43:16.035224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.130 [2024-11-20 09:43:16.035252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.388 [2024-11-20 09:43:16.047943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.388 [2024-11-20 09:43:16.047972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.388 [2024-11-20 09:43:16.059483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.388 [2024-11-20 09:43:16.059511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.388 [2024-11-20 09:43:16.067993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.388 [2024-11-20 09:43:16.068021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.388 [2024-11-20 09:43:16.081098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.388 [2024-11-20 09:43:16.081127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.388 [2024-11-20 09:43:16.091195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.388 [2024-11-20 09:43:16.091223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.388 [2024-11-20 09:43:16.101568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.388 [2024-11-20 09:43:16.101597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.388 [2024-11-20 09:43:16.112218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.388 [2024-11-20 09:43:16.112246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.388 [2024-11-20 09:43:16.122923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.388 [2024-11-20 09:43:16.122951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.388 [2024-11-20 09:43:16.135582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.388 [2024-11-20 09:43:16.135610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.388 [2024-11-20 09:43:16.145689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.388 [2024-11-20 09:43:16.145717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.388 [2024-11-20 09:43:16.156084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.388 [2024-11-20 09:43:16.156111] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.388 [2024-11-20 09:43:16.166101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.388 [2024-11-20 09:43:16.166129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.388 [2024-11-20 09:43:16.176395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.388 [2024-11-20 09:43:16.176423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.388 [2024-11-20 09:43:16.186517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.388 [2024-11-20 09:43:16.186545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.388 [2024-11-20 09:43:16.196710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.388 [2024-11-20 09:43:16.196738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.388 [2024-11-20 09:43:16.207072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.389 [2024-11-20 09:43:16.207101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.389 [2024-11-20 09:43:16.217528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.389 [2024-11-20 09:43:16.217556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.389 [2024-11-20 09:43:16.227925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.389 [2024-11-20 09:43:16.227953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.389 [2024-11-20 09:43:16.239068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.389 [2024-11-20 09:43:16.239096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.389 [2024-11-20 09:43:16.249596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.389 [2024-11-20 09:43:16.249624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.389 [2024-11-20 09:43:16.260569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.389 [2024-11-20 09:43:16.260597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.389 [2024-11-20 09:43:16.270962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.389 [2024-11-20 09:43:16.270990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.389 [2024-11-20 09:43:16.281780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.389 [2024-11-20 09:43:16.281808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.389 [2024-11-20 09:43:16.292353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.389 [2024-11-20 09:43:16.292382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.305196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.305223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.317235] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.317263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.326168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.326196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.337566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.337594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.350064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.350093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.359226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.359253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.372084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.372112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.382447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.382474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.392842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.392870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.402932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.402960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.413410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.413438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.424069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.424097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.434841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.434869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.447214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.447241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.457270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.457298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.467781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.467809] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.477655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.477683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.488704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.488731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.501667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.501695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.511973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.512001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.522395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.522423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.532860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.532887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.543532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.543559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.647 [2024-11-20 09:43:16.554356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.647 [2024-11-20 09:43:16.554385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.906 [2024-11-20 09:43:16.565072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.906 [2024-11-20 09:43:16.565101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.906 [2024-11-20 09:43:16.575774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.906 [2024-11-20 09:43:16.575802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.906 11957.75 IOPS, 93.42 MiB/s [2024-11-20T08:43:16.820Z] [2024-11-20 09:43:16.586582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.906 [2024-11-20 09:43:16.586611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.906 [2024-11-20 09:43:16.599261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.906 [2024-11-20 09:43:16.599290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.906 [2024-11-20 09:43:16.609604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.906 [2024-11-20 09:43:16.609633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.906 [2024-11-20 09:43:16.620400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.906 [2024-11-20 09:43:16.620429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.906 [2024-11-20 
09:43:16.632856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.906 [2024-11-20 09:43:16.632885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.906 [2024-11-20 09:43:16.642718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.906 [2024-11-20 09:43:16.642747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.906 [2024-11-20 09:43:16.653807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.906 [2024-11-20 09:43:16.653836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.906 [2024-11-20 09:43:16.666526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.906 [2024-11-20 09:43:16.666554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.906 [2024-11-20 09:43:16.676606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.906 [2024-11-20 09:43:16.676657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.906 [2024-11-20 09:43:16.687276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.906 [2024-11-20 09:43:16.687314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.906 [2024-11-20 09:43:16.700234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.906 [2024-11-20 09:43:16.700262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.906 [2024-11-20 09:43:16.710343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.907 [2024-11-20 09:43:16.710371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.907 [2024-11-20 09:43:16.720830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.907 [2024-11-20 09:43:16.720858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.907 [2024-11-20 09:43:16.731321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.907 [2024-11-20 09:43:16.731348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.907 [2024-11-20 09:43:16.742067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.907 [2024-11-20 09:43:16.742094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.907 [2024-11-20 09:43:16.754915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.907 [2024-11-20 09:43:16.754942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.907 [2024-11-20 09:43:16.766544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.907 [2024-11-20 09:43:16.766572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.907 [2024-11-20 09:43:16.775969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.907 [2024-11-20 09:43:16.775997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.907 [2024-11-20 09:43:16.787281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.907 [2024-11-20 09:43:16.787325] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.907 [2024-11-20 09:43:16.800017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.907 [2024-11-20 09:43:16.800045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.907 [2024-11-20 09:43:16.811747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.907 [2024-11-20 09:43:16.811775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:16.820852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:16.820880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:16.832352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:16.832380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:16.843227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:16.843256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:16.853836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:16.853864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:16.866104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:16.866132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:16.875231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:16.875259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:16.888407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:16.888435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:16.900453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:16.900494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:16.910085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:16.910113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:16.921199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:16.921227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:16.932225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:16.932253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:16.943135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:16.943163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:16.955654] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:16.955682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:16.965339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:16.965368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:16.977519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:16.977548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:16.988047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:16.988075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:16.999009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:16.999045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:17.011428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:17.011467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:17.021499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:17.021527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:17.031988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:17.032017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:17.042910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.165 [2024-11-20 09:43:17.042940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.165 [2024-11-20 09:43:17.053200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.166 [2024-11-20 09:43:17.053229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.166 [2024-11-20 09:43:17.063024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.166 [2024-11-20 09:43:17.063053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.166 [2024-11-20 09:43:17.073379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.166 [2024-11-20 09:43:17.073407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.423 [2024-11-20 09:43:17.084088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.423 [2024-11-20 09:43:17.084117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.423 [2024-11-20 09:43:17.094821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.423 [2024-11-20 09:43:17.094849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.423 [2024-11-20 09:43:17.105025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.423 [2024-11-20 09:43:17.105053] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.423 [2024-11-20 09:43:17.115467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.423 [2024-11-20 09:43:17.115496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.423 [2024-11-20 09:43:17.125943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.423 [2024-11-20 09:43:17.125972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.423 [2024-11-20 09:43:17.136541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.423 [2024-11-20 09:43:17.136568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.423 [2024-11-20 09:43:17.146873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.423 [2024-11-20 09:43:17.146900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.423 [2024-11-20 09:43:17.157460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.423 [2024-11-20 09:43:17.157488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.424 [2024-11-20 09:43:17.170407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.424 [2024-11-20 09:43:17.170435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.424 [2024-11-20 09:43:17.180090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.424 [2024-11-20 09:43:17.180118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.424 [2024-11-20 09:43:17.195502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.424 [2024-11-20 09:43:17.195531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.424 [2024-11-20 09:43:17.205761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.424 [2024-11-20 09:43:17.205799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.424 [2024-11-20 09:43:17.216377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.424 [2024-11-20 09:43:17.216405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.424 [2024-11-20 09:43:17.226868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.424 [2024-11-20 09:43:17.226896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.424 [2024-11-20 09:43:17.237739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.424 [2024-11-20 09:43:17.237767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.424 [2024-11-20 09:43:17.248620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.424 [2024-11-20 09:43:17.248649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.424 [2024-11-20 09:43:17.261397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.424 [2024-11-20 09:43:17.261425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.424 [2024-11-20 09:43:17.271526] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.424 [2024-11-20 09:43:17.271553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.424 [2024-11-20 09:43:17.282650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.424 [2024-11-20 09:43:17.282678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.424 [2024-11-20 09:43:17.293619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.424 [2024-11-20 09:43:17.293648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.424 [2024-11-20 09:43:17.304522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.424 [2024-11-20 09:43:17.304550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.424 [2024-11-20 09:43:17.315649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.424 [2024-11-20 09:43:17.315676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.424 [2024-11-20 09:43:17.325901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.424 [2024-11-20 09:43:17.325929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.682 [2024-11-20 09:43:17.336408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.682 [2024-11-20 09:43:17.336436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.682 [2024-11-20 09:43:17.346897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.682 [2024-11-20 09:43:17.346926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.682 [2024-11-20 09:43:17.357728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.682 [2024-11-20 09:43:17.357756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.682 [2024-11-20 09:43:17.368510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.682 [2024-11-20 09:43:17.368537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.682 [2024-11-20 09:43:17.380896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.682 [2024-11-20 09:43:17.380924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.682 [2024-11-20 09:43:17.392583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.682 [2024-11-20 09:43:17.392611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.682 [2024-11-20 09:43:17.402754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.682 [2024-11-20 09:43:17.402782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.682 [2024-11-20 09:43:17.413894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.682 [2024-11-20 09:43:17.413931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.682 [2024-11-20 09:43:17.424507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.682 [2024-11-20 09:43:17.424534] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.682 [2024-11-20 09:43:17.435344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.682 [2024-11-20 09:43:17.435373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.682 [2024-11-20 09:43:17.447913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.682 [2024-11-20 09:43:17.447941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.682 [2024-11-20 09:43:17.458052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.682 [2024-11-20 09:43:17.458079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.682 [2024-11-20 09:43:17.468516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.682 [2024-11-20 09:43:17.468544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.682 [2024-11-20 09:43:17.479343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.682 [2024-11-20 09:43:17.479370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.682 [2024-11-20 09:43:17.492023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.682 [2024-11-20 09:43:17.492050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.682 [2024-11-20 09:43:17.502427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.682 [2024-11-20 09:43:17.502454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.682 [2024-11-20 09:43:17.513375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.682 [2024-11-20 09:43:17.513403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.682 [2024-11-20 09:43:17.524126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.682 [2024-11-20 09:43:17.524153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.682 [2024-11-20 09:43:17.534820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.683 [2024-11-20 09:43:17.534847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.683 [2024-11-20 09:43:17.547707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.683 [2024-11-20 09:43:17.547735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.683 [2024-11-20 09:43:17.558000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.683 [2024-11-20 09:43:17.558027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.683 [2024-11-20 09:43:17.569196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.683 [2024-11-20 09:43:17.569224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.683 [2024-11-20 09:43:17.579271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.683 [2024-11-20 09:43:17.579322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.683 11949.60 IOPS, 93.36 MiB/s [2024-11-20T08:43:17.597Z] [2024-11-20 
09:43:17.588620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.683 [2024-11-20 09:43:17.588648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.683 00:09:40.683 Latency(us) 00:09:40.683 [2024-11-20T08:43:17.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.683 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:40.683 Nvme1n1 : 5.01 11950.21 93.36 0.00 0.00 10697.55 4514.70 19612.25 00:09:40.683 [2024-11-20T08:43:17.597Z] =================================================================================================================== 00:09:40.683 [2024-11-20T08:43:17.597Z] Total : 11950.21 93.36 0.00 0.00 10697.55 4514.70 19612.25 00:09:40.683 [2024-11-20 09:43:17.594015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.683 [2024-11-20 09:43:17.594040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.602012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.602035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.610029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.610049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.618099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.618140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.626117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.626160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.634142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.634182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.642162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.642201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.650180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.650220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.658203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.658246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.666224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.666264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.674246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.674286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.682266] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.682315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.690288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.690335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.698317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.698357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.706338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.706378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.714356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.714396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.722379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.722418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.730402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.730442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.738430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.738466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.746404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.746426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.754425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.754447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.762445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.762468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.770470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.770492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.778530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.778572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.786552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.786591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.794547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.794572] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.942 [2024-11-20 09:43:17.802557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.942 [2024-11-20 09:43:17.802578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.943 [2024-11-20 09:43:17.810582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.943 [2024-11-20 09:43:17.810604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3657572) - No such process 00:09:40.943 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3657572 00:09:40.943 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.943 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.943 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:40.943 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.943 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:40.943 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.943 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:40.943 delay0 00:09:40.943 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.943 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:40.943 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.943 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:40.943 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.943 09:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:41.201 [2024-11-20 09:43:17.891000] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:47.754 Initializing NVMe Controllers 00:09:47.754 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:47.754 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:47.754 Initialization complete. Launching workers. 
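For reference, the zcopy steps traced above reduce to a short sequence of SPDK JSON-RPC calls followed by the bundled abort example. The sketch below is a standalone equivalent, assuming nvmf_tgt is already running with subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420 and a malloc0 bdev already created; the test's rpc_cmd helper issues the same methods, and scripts/rpc.py is used here in its place.

# Standalone sketch of the flow exercised by zcopy.sh (paths assume the SPDK checkout used in this job).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
# Drop the original namespace, wrap malloc0 in a delay bdev, and re-expose it as NSID 1.
"$RPC" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
"$RPC" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# Drive aborts against the slow namespace for 5 seconds at queue depth 64, as in the run above.
"$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'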
00:09:47.754 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 109 00:09:47.754 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 396, failed to submit 33 00:09:47.754 success 240, unsuccessful 156, failed 0 00:09:47.754 09:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:47.754 09:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:47.754 09:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:47.754 09:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:47.754 09:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.754 09:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:47.754 09:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.754 09:43:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.754 rmmod nvme_tcp 00:09:47.754 rmmod nvme_fabrics 00:09:47.754 rmmod nvme_keyring 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3656344 ']' 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3656344 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3656344 ']' 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3656344 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3656344 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3656344' 00:09:47.754 killing process with pid 3656344 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3656344 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3656344 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:47.754 09:43:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.754 09:43:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.657 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:49.657 00:09:49.657 real 0m27.984s 00:09:49.657 user 0m41.378s 00:09:49.657 sys 0m8.143s 00:09:49.657 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.657 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.657 ************************************ 00:09:49.657 END TEST nvmf_zcopy 00:09:49.657 ************************************ 00:09:49.657 09:43:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:49.657 09:43:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:49.657 09:43:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.657 09:43:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:49.657 ************************************ 00:09:49.657 START TEST nvmf_nmic 00:09:49.657 ************************************ 00:09:49.657 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:49.657 * Looking for test storage... 
00:09:49.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:49.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.658 --rc genhtml_branch_coverage=1 00:09:49.658 --rc genhtml_function_coverage=1 00:09:49.658 --rc genhtml_legend=1 00:09:49.658 --rc geninfo_all_blocks=1 00:09:49.658 --rc geninfo_unexecuted_blocks=1 00:09:49.658 00:09:49.658 ' 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:49.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.658 --rc genhtml_branch_coverage=1 00:09:49.658 --rc genhtml_function_coverage=1 00:09:49.658 --rc genhtml_legend=1 00:09:49.658 --rc geninfo_all_blocks=1 00:09:49.658 --rc geninfo_unexecuted_blocks=1 00:09:49.658 00:09:49.658 ' 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:49.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.658 --rc genhtml_branch_coverage=1 00:09:49.658 --rc genhtml_function_coverage=1 00:09:49.658 --rc genhtml_legend=1 00:09:49.658 --rc geninfo_all_blocks=1 00:09:49.658 --rc geninfo_unexecuted_blocks=1 00:09:49.658 00:09:49.658 ' 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:49.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.658 --rc genhtml_branch_coverage=1 00:09:49.658 --rc genhtml_function_coverage=1 00:09:49.658 --rc genhtml_legend=1 00:09:49.658 --rc geninfo_all_blocks=1 00:09:49.658 --rc geninfo_unexecuted_blocks=1 00:09:49.658 00:09:49.658 ' 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
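The xtrace lines above walk scripts/common.sh's element-wise version comparison (splitting 1.15 and 2 on dots and comparing field by field) to decide which lcov coverage options to pass. Purely as an illustration, and not the SPDK cmp_versions helper itself, the same decision can be written with sort -V:

# Illustrative stand-in for the version check traced above (not the SPDK implementation).
version_lt() {
    # True when $1 sorts strictly before $2 in version order.
    [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
lcov_ver=$(lcov --version | awk '{print $NF}')
if version_lt "$lcov_ver" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi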
00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:49.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:49.658 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:49.659 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:49.659 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:49.659 
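The common.sh settings traced above (listener ports 4420-4422 plus a host NQN and host ID produced by nvme gen-hostnqn) are what the test reuses further down when it attaches the kernel initiator over both listeners and finally tears the session down. A sketch of that nvme-cli flow, with the generated values treated as placeholders:

# Initiator-side sketch; HOSTNQN/HOSTID stand in for whatever nvme gen-hostnqn produced.
HOSTNQN=$(nvme gen-hostnqn)
HOSTID=${HOSTNQN##*uuid:}    # common.sh derives the host ID from the uuid part of the NQN
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
    -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
    -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
# ... run I/O against the resulting /dev/nvme0n1, then drop both paths at once:
nvme disconnect -n nqn.2016-06.io.spdk:cnode1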
09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:49.659 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.659 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:49.659 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:49.659 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:49.659 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.659 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.659 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.917 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:49.917 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:49.917 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:49.917 09:43:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:52.452 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:52.452 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:52.452 09:43:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:52.452 Found net devices under 0000:09:00.0: cvl_0_0 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:52.452 Found net devices under 0000:09:00.1: cvl_0_1 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:52.452 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:52.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:09:52.453 00:09:52.453 --- 10.0.0.2 ping statistics --- 00:09:52.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.453 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:52.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:09:52.453 00:09:52.453 --- 10.0.0.1 ping statistics --- 00:09:52.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.453 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3660978 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3660978 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3660978 ']' 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.453 09:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.453 [2024-11-20 09:43:29.018412] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
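Before nvmf_tgt comes up, nvmf_tcp_init (traced above) moves one of the two e810 ports into a dedicated network namespace for the target side, leaves the other in the root namespace for the initiator, opens TCP port 4420 in iptables, and verifies reachability in both directions. Condensed into a sketch of the same plumbing, not a substitute for nvmf/common.sh:

# Namespace plumbing sketch, mirroring the commands traced above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                           # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target namespace -> initiator
# The target is then launched inside the namespace so it binds 10.0.0.2:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &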
00:09:52.453 [2024-11-20 09:43:29.018509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.453 [2024-11-20 09:43:29.091519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:52.453 [2024-11-20 09:43:29.151440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:52.453 [2024-11-20 09:43:29.151488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:52.453 [2024-11-20 09:43:29.151503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:52.453 [2024-11-20 09:43:29.151516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:52.453 [2024-11-20 09:43:29.151526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:52.453 [2024-11-20 09:43:29.153329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.453 [2024-11-20 09:43:29.153359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:52.453 [2024-11-20 09:43:29.153407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:52.453 [2024-11-20 09:43:29.153411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.453 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.453 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:52.453 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:52.453 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:52.453 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.453 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.453 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:52.453 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.453 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.453 [2024-11-20 09:43:29.305165] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.453 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.453 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:52.453 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.453 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.453 Malloc0 00:09:52.453 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.453 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:52.453 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.453 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.712 [2024-11-20 09:43:29.376959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:52.712 test case1: single bdev can't be used in multiple subsystems 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.712 [2024-11-20 09:43:29.400797] bdev.c:8203:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:52.712 [2024-11-20 09:43:29.400826] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:52.712 [2024-11-20 09:43:29.400855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.712 request: 00:09:52.712 { 00:09:52.712 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:52.712 "namespace": { 00:09:52.712 "bdev_name": "Malloc0", 00:09:52.712 "no_auto_visible": false 
00:09:52.712 }, 00:09:52.712 "method": "nvmf_subsystem_add_ns", 00:09:52.712 "req_id": 1 00:09:52.712 } 00:09:52.712 Got JSON-RPC error response 00:09:52.712 response: 00:09:52.712 { 00:09:52.712 "code": -32602, 00:09:52.712 "message": "Invalid parameters" 00:09:52.712 } 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:52.712 Adding namespace failed - expected result. 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:52.712 test case2: host connect to nvmf target in multiple paths 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.712 [2024-11-20 09:43:29.408931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.712 09:43:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:53.277 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:54.209 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:54.210 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:54.210 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:54.210 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:54.210 09:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:56.114 09:43:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:56.114 09:43:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:56.114 09:43:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:56.114 09:43:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:56.114 09:43:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:56.114 09:43:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:56.114 09:43:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:56.114 [global] 00:09:56.114 thread=1 00:09:56.114 invalidate=1 00:09:56.114 rw=write 00:09:56.114 time_based=1 00:09:56.114 runtime=1 00:09:56.114 ioengine=libaio 00:09:56.114 direct=1 00:09:56.114 bs=4096 00:09:56.114 iodepth=1 00:09:56.114 norandommap=0 00:09:56.114 numjobs=1 00:09:56.114 00:09:56.114 verify_dump=1 00:09:56.114 verify_backlog=512 00:09:56.114 verify_state_save=0 00:09:56.114 do_verify=1 00:09:56.114 verify=crc32c-intel 00:09:56.114 [job0] 00:09:56.114 filename=/dev/nvme0n1 00:09:56.114 Could not set queue depth (nvme0n1) 00:09:56.371 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.371 fio-3.35 00:09:56.371 Starting 1 thread 00:09:57.304 00:09:57.304 job0: (groupid=0, jobs=1): err= 0: pid=3661609: Wed Nov 20 09:43:34 2024 00:09:57.304 read: IOPS=133, BW=534KiB/s (547kB/s)(540KiB/1011msec) 00:09:57.304 slat (nsec): min=7621, max=64217, avg=17016.67, stdev=8798.12 00:09:57.304 clat (usec): min=199, max=41061, avg=6582.99, stdev=14803.71 00:09:57.304 lat (usec): min=218, max=41079, avg=6600.01, stdev=14808.21 00:09:57.304 clat percentiles (usec): 00:09:57.304 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 221], 00:09:57.304 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 247], 00:09:57.304 | 70.00th=[ 258], 80.00th=[ 433], 90.00th=[41157], 95.00th=[41157], 00:09:57.304 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:57.304 | 99.99th=[41157] 00:09:57.304 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:09:57.304 slat (usec): min=7, max=28746, avg=70.73, stdev=1269.79 00:09:57.304 clat (usec): min=130, max=242, avg=157.55, stdev=12.78 00:09:57.304 lat (usec): min=139, max=28945, avg=228.28, stdev=1271.71 00:09:57.304 clat percentiles (usec): 00:09:57.304 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:09:57.304 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:09:57.304 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 180], 00:09:57.304 | 99.00th=[ 190], 99.50th=[ 200], 99.90th=[ 243], 99.95th=[ 243], 00:09:57.304 | 99.99th=[ 243] 00:09:57.304 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:57.304 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:57.304 lat (usec) : 250=92.27%, 500=4.02%, 750=0.46% 00:09:57.304 lat (msec) : 50=3.25% 00:09:57.304 cpu : usr=0.50%, sys=0.89%, ctx=651, majf=0, minf=1 00:09:57.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.304 issued rwts: total=135,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.304 00:09:57.304 Run status group 0 (all jobs): 00:09:57.304 READ: bw=534KiB/s (547kB/s), 534KiB/s-534KiB/s (547kB/s-547kB/s), io=540KiB (553kB), run=1011-1011msec 00:09:57.304 WRITE: bw=2026KiB/s (2074kB/s), 2026KiB/s-2026KiB/s (2074kB/s-2074kB/s), io=2048KiB (2097kB), run=1011-1011msec 00:09:57.304 00:09:57.304 Disk stats (read/write): 00:09:57.304 nvme0n1: ios=158/512, merge=0/0, ticks=1750/76, in_queue=1826, util=98.70% 00:09:57.304 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:57.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:57.562 rmmod nvme_tcp 00:09:57.562 rmmod nvme_fabrics 00:09:57.562 rmmod nvme_keyring 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3660978 ']' 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3660978 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3660978 ']' 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3660978 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3660978 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3660978' 00:09:57.562 killing process with pid 3660978 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3660978 00:09:57.562 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 3660978 00:09:57.821 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:57.821 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:57.821 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:57.821 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:57.821 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:57.821 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:57.821 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:57.821 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:57.821 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:57.821 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.821 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.821 09:43:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:00.360 00:10:00.360 real 0m10.353s 00:10:00.360 user 0m23.173s 00:10:00.360 sys 0m2.556s 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:00.360 ************************************ 00:10:00.360 END TEST nvmf_nmic 00:10:00.360 ************************************ 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.360 ************************************ 00:10:00.360 START TEST nvmf_fio_target 00:10:00.360 ************************************ 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:00.360 * Looking for test storage... 
00:10:00.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:00.360 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:00.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.361 --rc genhtml_branch_coverage=1 00:10:00.361 --rc genhtml_function_coverage=1 00:10:00.361 --rc genhtml_legend=1 00:10:00.361 --rc geninfo_all_blocks=1 00:10:00.361 --rc geninfo_unexecuted_blocks=1 00:10:00.361 00:10:00.361 ' 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:00.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.361 --rc genhtml_branch_coverage=1 00:10:00.361 --rc genhtml_function_coverage=1 00:10:00.361 --rc genhtml_legend=1 00:10:00.361 --rc geninfo_all_blocks=1 00:10:00.361 --rc geninfo_unexecuted_blocks=1 00:10:00.361 00:10:00.361 ' 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:00.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.361 --rc genhtml_branch_coverage=1 00:10:00.361 --rc genhtml_function_coverage=1 00:10:00.361 --rc genhtml_legend=1 00:10:00.361 --rc geninfo_all_blocks=1 00:10:00.361 --rc geninfo_unexecuted_blocks=1 00:10:00.361 00:10:00.361 ' 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:00.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.361 --rc genhtml_branch_coverage=1 00:10:00.361 --rc genhtml_function_coverage=1 00:10:00.361 --rc genhtml_legend=1 00:10:00.361 --rc geninfo_all_blocks=1 00:10:00.361 --rc geninfo_unexecuted_blocks=1 00:10:00.361 00:10:00.361 ' 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:00.361 09:43:36 
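The "[: : integer expression expected" message printed above is harmless: at nvmf/common.sh line 33 the traced test is '[' '' -eq 1 ']', i.e. an unset flag is compared numerically. A minimal sketch of a guard that would silence that kind of warning, using a made-up flag name purely for illustration (the real variable name is not visible in this trace):

  # SOME_TEST_FLAG is hypothetical; defaulting it to 0 keeps the -eq test numeric.
  if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
          echo "flag enabled"
  fi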
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:00.361 09:43:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.266 09:43:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:02.266 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:02.266 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:02.267 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.267 09:43:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:02.267 Found net devices under 0000:09:00.0: cvl_0_0 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:02.267 Found net devices under 0000:09:00.1: cvl_0_1 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:02.267 09:43:39 
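For reference, the device discovery traced above resolves NIC names from PCI addresses through sysfs; a minimal standalone sketch of that lookup (the PCI address is one of the two e810 ports reported in this log, and the glob and name stripping mirror the traced pci_net_devs handling):

  # List the kernel net devices that belong to one PCI function, as the script does.
  pci=0000:09:00.0
  for path in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$path" ] && echo "Found net device under $pci: ${path##*/}"
  done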
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.267 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:02.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:10:02.525 00:10:02.525 --- 10.0.0.2 ping statistics --- 00:10:02.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.525 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:02.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:10:02.525 00:10:02.525 --- 10.0.0.1 ping statistics --- 00:10:02.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.525 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3663701 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3663701 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3663701 ']' 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.525 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.525 [2024-11-20 09:43:39.373632] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
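By this point the trace has wired up the TCP test bed and started nvmf_tgt inside the target namespace; condensed into a standalone sketch, assuming the interface names, addresses and port shown in this log:

  # Move the target-side e810 port into its own namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Let NVMe/TCP traffic in, verify reachability both ways, then launch the target.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &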
00:10:02.525 [2024-11-20 09:43:39.373729] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.783 [2024-11-20 09:43:39.449337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:02.783 [2024-11-20 09:43:39.508897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.783 [2024-11-20 09:43:39.508939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.783 [2024-11-20 09:43:39.508967] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.783 [2024-11-20 09:43:39.508977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.783 [2024-11-20 09:43:39.508986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.783 [2024-11-20 09:43:39.510534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.783 [2024-11-20 09:43:39.510560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.783 [2024-11-20 09:43:39.510632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.783 [2024-11-20 09:43:39.510636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.783 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.783 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:02.783 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:02.783 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:02.783 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.783 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.783 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:03.041 [2024-11-20 09:43:39.894960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.041 09:43:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.641 09:43:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:03.641 09:43:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.923 09:43:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:03.923 09:43:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:04.181 09:43:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:04.181 09:43:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:04.439 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:04.439 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:04.697 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:04.954 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:04.954 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.212 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:05.212 09:43:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.470 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:05.470 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:05.727 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:05.985 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:05.985 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:06.242 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:06.242 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:06.500 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.758 [2024-11-20 09:43:43.583908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.758 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:07.016 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:07.274 09:43:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:08.206 09:43:44 
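The target provisioning and initiator connect traced above condense to the following sketch (NQN, serial, addresses and bdev parameters are the ones shown in this log; the loop over the malloc bdevs is a simplification of the seven individual calls):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for _ in {1..7}; do $rpc bdev_malloc_create 64 512; done   # Malloc0..Malloc6: 64 MB, 512-byte blocks
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  # Connect from the initiator side; the four namespaces show up as nvme0n1..nvme0n4.
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
               --hostid=29f67375-a902-e411-ace9-001e67bc3c9a \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420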
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:08.206 09:43:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:08.206 09:43:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:08.206 09:43:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:08.206 09:43:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:08.206 09:43:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:10.105 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:10.105 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:10.105 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:10.105 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:10.105 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:10.105 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:10.105 09:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:10.105 [global] 00:10:10.105 thread=1 00:10:10.105 invalidate=1 00:10:10.105 rw=write 00:10:10.105 time_based=1 00:10:10.105 runtime=1 00:10:10.105 ioengine=libaio 00:10:10.105 direct=1 00:10:10.105 bs=4096 00:10:10.105 iodepth=1 00:10:10.105 norandommap=0 00:10:10.105 numjobs=1 00:10:10.105 00:10:10.105 verify_dump=1 00:10:10.105 verify_backlog=512 00:10:10.105 verify_state_save=0 00:10:10.105 do_verify=1 00:10:10.105 verify=crc32c-intel 00:10:10.105 [job0] 00:10:10.105 filename=/dev/nvme0n1 00:10:10.105 [job1] 00:10:10.105 filename=/dev/nvme0n2 00:10:10.105 [job2] 00:10:10.105 filename=/dev/nvme0n3 00:10:10.105 [job3] 00:10:10.105 filename=/dev/nvme0n4 00:10:10.105 Could not set queue depth (nvme0n1) 00:10:10.105 Could not set queue depth (nvme0n2) 00:10:10.105 Could not set queue depth (nvme0n3) 00:10:10.105 Could not set queue depth (nvme0n4) 00:10:10.363 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.363 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.363 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.363 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.363 fio-3.35 00:10:10.363 Starting 4 threads 00:10:11.735 00:10:11.735 job0: (groupid=0, jobs=1): err= 0: pid=3664783: Wed Nov 20 09:43:48 2024 00:10:11.735 read: IOPS=143, BW=574KiB/s (588kB/s)(584KiB/1017msec) 00:10:11.735 slat (nsec): min=7409, max=65915, avg=15884.93, stdev=8992.93 00:10:11.735 clat (usec): min=215, max=42029, avg=6173.61, stdev=14465.91 00:10:11.735 lat (usec): min=226, max=42046, avg=6189.50, stdev=14468.46 00:10:11.735 clat percentiles (usec): 00:10:11.735 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 237], 
00:10:11.735 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 273], 00:10:11.735 | 70.00th=[ 293], 80.00th=[ 338], 90.00th=[41157], 95.00th=[41681], 00:10:11.735 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:11.735 | 99.99th=[42206] 00:10:11.735 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:10:11.735 slat (nsec): min=7614, max=37532, avg=11349.60, stdev=4398.17 00:10:11.735 clat (usec): min=152, max=292, avg=204.09, stdev=22.81 00:10:11.735 lat (usec): min=162, max=308, avg=215.44, stdev=23.29 00:10:11.735 clat percentiles (usec): 00:10:11.735 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 184], 00:10:11.735 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 210], 00:10:11.735 | 70.00th=[ 219], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 241], 00:10:11.735 | 99.00th=[ 253], 99.50th=[ 269], 99.90th=[ 293], 99.95th=[ 293], 00:10:11.735 | 99.99th=[ 293] 00:10:11.735 bw ( KiB/s): min= 4096, max= 4096, per=20.91%, avg=4096.00, stdev= 0.00, samples=1 00:10:11.735 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:11.735 lat (usec) : 250=86.32%, 500=10.49% 00:10:11.736 lat (msec) : 50=3.19% 00:10:11.736 cpu : usr=0.59%, sys=0.59%, ctx=660, majf=0, minf=1 00:10:11.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.736 issued rwts: total=146,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.736 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.736 job1: (groupid=0, jobs=1): err= 0: pid=3664784: Wed Nov 20 09:43:48 2024 00:10:11.736 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:10:11.736 slat (nsec): min=8287, max=38322, avg=21912.09, stdev=9619.50 00:10:11.736 clat (usec): min=26240, max=42048, avg=40655.89, stdev=3257.34 00:10:11.736 lat (usec): min=26276, max=42064, avg=40677.80, stdev=3254.29 00:10:11.736 clat percentiles (usec): 00:10:11.736 | 1.00th=[26346], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:11.736 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:11.736 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:11.736 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:11.736 | 99.99th=[42206] 00:10:11.736 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:10:11.736 slat (nsec): min=7035, max=35783, avg=10862.22, stdev=4113.32 00:10:11.736 clat (usec): min=149, max=376, avg=194.51, stdev=27.76 00:10:11.736 lat (usec): min=159, max=390, avg=205.37, stdev=29.25 00:10:11.736 clat percentiles (usec): 00:10:11.736 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 169], 00:10:11.736 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 198], 00:10:11.736 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 229], 95.00th=[ 237], 00:10:11.736 | 99.00th=[ 262], 99.50th=[ 293], 99.90th=[ 379], 99.95th=[ 379], 00:10:11.736 | 99.99th=[ 379] 00:10:11.736 bw ( KiB/s): min= 4096, max= 4096, per=20.91%, avg=4096.00, stdev= 0.00, samples=1 00:10:11.736 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:11.736 lat (usec) : 250=94.38%, 500=1.50% 00:10:11.736 lat (msec) : 50=4.12% 00:10:11.736 cpu : usr=0.70%, sys=0.30%, ctx=535, majf=0, minf=2 00:10:11.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:10:11.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.736 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.736 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.736 job2: (groupid=0, jobs=1): err= 0: pid=3664785: Wed Nov 20 09:43:48 2024 00:10:11.736 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:11.736 slat (nsec): min=5431, max=66797, avg=20710.56, stdev=11205.14 00:10:11.736 clat (usec): min=183, max=616, avg=324.86, stdev=71.34 00:10:11.736 lat (usec): min=199, max=627, avg=345.57, stdev=73.05 00:10:11.736 clat percentiles (usec): 00:10:11.736 | 1.00th=[ 204], 5.00th=[ 223], 10.00th=[ 235], 20.00th=[ 269], 00:10:11.736 | 30.00th=[ 281], 40.00th=[ 302], 50.00th=[ 322], 60.00th=[ 338], 00:10:11.736 | 70.00th=[ 355], 80.00th=[ 371], 90.00th=[ 416], 95.00th=[ 482], 00:10:11.736 | 99.00th=[ 519], 99.50th=[ 529], 99.90th=[ 570], 99.95th=[ 619], 00:10:11.736 | 99.99th=[ 619] 00:10:11.736 write: IOPS=1906, BW=7624KiB/s (7807kB/s)(7632KiB/1001msec); 0 zone resets 00:10:11.736 slat (nsec): min=6971, max=70486, avg=16728.28, stdev=6498.04 00:10:11.736 clat (usec): min=126, max=782, avg=219.83, stdev=53.34 00:10:11.736 lat (usec): min=136, max=791, avg=236.55, stdev=53.41 00:10:11.736 clat percentiles (usec): 00:10:11.736 | 1.00th=[ 137], 5.00th=[ 159], 10.00th=[ 180], 20.00th=[ 186], 00:10:11.736 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 210], 00:10:11.736 | 70.00th=[ 227], 80.00th=[ 269], 90.00th=[ 297], 95.00th=[ 318], 00:10:11.736 | 99.00th=[ 375], 99.50th=[ 429], 99.90th=[ 660], 99.95th=[ 783], 00:10:11.736 | 99.99th=[ 783] 00:10:11.736 bw ( KiB/s): min= 8192, max= 8192, per=41.82%, avg=8192.00, stdev= 0.00, samples=1 00:10:11.736 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:11.736 lat (usec) : 250=50.03%, 500=48.84%, 750=1.10%, 1000=0.03% 00:10:11.736 cpu : usr=4.20%, sys=5.80%, ctx=3447, majf=0, minf=1 00:10:11.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.736 issued rwts: total=1536,1908,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.736 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.736 job3: (groupid=0, jobs=1): err= 0: pid=3664786: Wed Nov 20 09:43:48 2024 00:10:11.736 read: IOPS=1549, BW=6198KiB/s (6347kB/s)(6204KiB/1001msec) 00:10:11.736 slat (nsec): min=7392, max=69246, avg=13644.55, stdev=6682.41 00:10:11.736 clat (usec): min=185, max=627, avg=305.47, stdev=63.33 00:10:11.736 lat (usec): min=193, max=661, avg=319.12, stdev=65.92 00:10:11.736 clat percentiles (usec): 00:10:11.736 | 1.00th=[ 196], 5.00th=[ 219], 10.00th=[ 241], 20.00th=[ 269], 00:10:11.736 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 306], 00:10:11.736 | 70.00th=[ 314], 80.00th=[ 334], 90.00th=[ 371], 95.00th=[ 441], 00:10:11.736 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[ 619], 99.95th=[ 627], 00:10:11.736 | 99.99th=[ 627] 00:10:11.736 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:11.736 slat (nsec): min=9499, max=67087, avg=19367.42, stdev=7120.54 00:10:11.736 clat (usec): min=140, max=808, avg=219.34, stdev=56.21 00:10:11.736 lat (usec): min=151, max=819, avg=238.71, stdev=55.00 00:10:11.736 clat percentiles (usec): 
00:10:11.736 | 1.00th=[ 149], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 180], 00:10:11.736 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 198], 60.00th=[ 210], 00:10:11.736 | 70.00th=[ 235], 80.00th=[ 262], 90.00th=[ 297], 95.00th=[ 326], 00:10:11.736 | 99.00th=[ 383], 99.50th=[ 424], 99.90th=[ 701], 99.95th=[ 734], 00:10:11.736 | 99.99th=[ 807] 00:10:11.736 bw ( KiB/s): min= 8192, max= 8192, per=41.82%, avg=8192.00, stdev= 0.00, samples=1 00:10:11.736 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:11.736 lat (usec) : 250=47.82%, 500=51.07%, 750=1.08%, 1000=0.03% 00:10:11.736 cpu : usr=4.80%, sys=7.40%, ctx=3603, majf=0, minf=1 00:10:11.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.736 issued rwts: total=1551,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.736 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.736 00:10:11.736 Run status group 0 (all jobs): 00:10:11.736 READ: bw=12.5MiB/s (13.1MB/s), 87.8KiB/s-6198KiB/s (89.9kB/s-6347kB/s), io=12.7MiB (13.3MB), run=1001-1017msec 00:10:11.736 WRITE: bw=19.1MiB/s (20.1MB/s), 2014KiB/s-8184KiB/s (2062kB/s-8380kB/s), io=19.5MiB (20.4MB), run=1001-1017msec 00:10:11.736 00:10:11.736 Disk stats (read/write): 00:10:11.736 nvme0n1: ios=195/512, merge=0/0, ticks=1258/100, in_queue=1358, util=98.00% 00:10:11.736 nvme0n2: ios=18/512, merge=0/0, ticks=745/100, in_queue=845, util=86.67% 00:10:11.736 nvme0n3: ios=1378/1536, merge=0/0, ticks=1374/337, in_queue=1711, util=98.23% 00:10:11.736 nvme0n4: ios=1493/1536, merge=0/0, ticks=680/334, in_queue=1014, util=98.21% 00:10:11.736 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:11.736 [global] 00:10:11.736 thread=1 00:10:11.736 invalidate=1 00:10:11.736 rw=randwrite 00:10:11.736 time_based=1 00:10:11.736 runtime=1 00:10:11.736 ioengine=libaio 00:10:11.736 direct=1 00:10:11.736 bs=4096 00:10:11.736 iodepth=1 00:10:11.736 norandommap=0 00:10:11.736 numjobs=1 00:10:11.736 00:10:11.736 verify_dump=1 00:10:11.736 verify_backlog=512 00:10:11.736 verify_state_save=0 00:10:11.736 do_verify=1 00:10:11.736 verify=crc32c-intel 00:10:11.736 [job0] 00:10:11.736 filename=/dev/nvme0n1 00:10:11.736 [job1] 00:10:11.736 filename=/dev/nvme0n2 00:10:11.736 [job2] 00:10:11.736 filename=/dev/nvme0n3 00:10:11.736 [job3] 00:10:11.736 filename=/dev/nvme0n4 00:10:11.736 Could not set queue depth (nvme0n1) 00:10:11.736 Could not set queue depth (nvme0n2) 00:10:11.736 Could not set queue depth (nvme0n3) 00:10:11.736 Could not set queue depth (nvme0n4) 00:10:11.736 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:11.736 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:11.736 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:11.736 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:11.736 fio-3.35 00:10:11.736 Starting 4 threads 00:10:13.110 00:10:13.110 job0: (groupid=0, jobs=1): err= 0: pid=3665131: Wed Nov 20 09:43:49 2024 00:10:13.110 read: IOPS=20, BW=83.3KiB/s (85.3kB/s)(84.0KiB/1008msec) 
00:10:13.110 slat (nsec): min=15328, max=37070, avg=21931.29, stdev=8570.04 00:10:13.110 clat (usec): min=40908, max=44034, avg=41887.33, stdev=613.44 00:10:13.110 lat (usec): min=40926, max=44053, avg=41909.26, stdev=614.22 00:10:13.110 clat percentiles (usec): 00:10:13.110 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:10:13.110 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:13.110 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:13.110 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:10:13.110 | 99.99th=[43779] 00:10:13.110 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:10:13.110 slat (nsec): min=9125, max=54710, avg=18550.09, stdev=7965.40 00:10:13.110 clat (usec): min=160, max=767, avg=224.32, stdev=57.85 00:10:13.110 lat (usec): min=184, max=786, avg=242.87, stdev=56.82 00:10:13.110 clat percentiles (usec): 00:10:13.110 | 1.00th=[ 176], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 200], 00:10:13.110 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 215], 60.00th=[ 221], 00:10:13.110 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 285], 00:10:13.110 | 99.00th=[ 449], 99.50th=[ 725], 99.90th=[ 766], 99.95th=[ 766], 00:10:13.110 | 99.99th=[ 766] 00:10:13.110 bw ( KiB/s): min= 4096, max= 4096, per=26.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:13.110 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:13.110 lat (usec) : 250=88.56%, 500=6.57%, 750=0.75%, 1000=0.19% 00:10:13.110 lat (msec) : 50=3.94% 00:10:13.110 cpu : usr=0.79%, sys=1.09%, ctx=536, majf=0, minf=1 00:10:13.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.111 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.111 job1: (groupid=0, jobs=1): err= 0: pid=3665132: Wed Nov 20 09:43:49 2024 00:10:13.111 read: IOPS=136, BW=546KiB/s (559kB/s)(568KiB/1040msec) 00:10:13.111 slat (nsec): min=8867, max=54315, avg=18370.87, stdev=6160.16 00:10:13.111 clat (usec): min=194, max=42112, avg=6379.81, stdev=14683.54 00:10:13.111 lat (usec): min=205, max=42131, avg=6398.18, stdev=14683.26 00:10:13.111 clat percentiles (usec): 00:10:13.111 | 1.00th=[ 208], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 258], 00:10:13.111 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 289], 00:10:13.111 | 70.00th=[ 318], 80.00th=[ 347], 90.00th=[41157], 95.00th=[41681], 00:10:13.111 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:13.111 | 99.99th=[42206] 00:10:13.111 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:10:13.111 slat (nsec): min=9699, max=55659, avg=19054.09, stdev=7750.02 00:10:13.111 clat (usec): min=157, max=881, avg=229.07, stdev=56.58 00:10:13.111 lat (usec): min=169, max=893, avg=248.13, stdev=56.13 00:10:13.111 clat percentiles (usec): 00:10:13.111 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 196], 20.00th=[ 200], 00:10:13.111 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223], 00:10:13.111 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 265], 95.00th=[ 318], 00:10:13.111 | 99.00th=[ 408], 99.50th=[ 619], 99.90th=[ 881], 99.95th=[ 881], 00:10:13.111 | 99.99th=[ 881] 00:10:13.111 bw ( KiB/s): min= 4096, max= 4096, per=26.00%, 
avg=4096.00, stdev= 0.00, samples=1 00:10:13.111 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:13.111 lat (usec) : 250=68.96%, 500=27.06%, 750=0.46%, 1000=0.31% 00:10:13.111 lat (msec) : 50=3.21% 00:10:13.111 cpu : usr=1.06%, sys=1.25%, ctx=655, majf=0, minf=1 00:10:13.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.111 issued rwts: total=142,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.111 job2: (groupid=0, jobs=1): err= 0: pid=3665133: Wed Nov 20 09:43:49 2024 00:10:13.111 read: IOPS=1390, BW=5562KiB/s (5696kB/s)(5568KiB/1001msec) 00:10:13.111 slat (nsec): min=7269, max=61782, avg=14502.99, stdev=5595.83 00:10:13.111 clat (usec): min=198, max=44973, avg=454.73, stdev=2965.75 00:10:13.111 lat (usec): min=206, max=44991, avg=469.23, stdev=2966.07 00:10:13.111 clat percentiles (usec): 00:10:13.111 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 229], 00:10:13.111 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:10:13.111 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 281], 00:10:13.111 | 99.00th=[ 318], 99.50th=[41157], 99.90th=[42206], 99.95th=[44827], 00:10:13.111 | 99.99th=[44827] 00:10:13.111 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:13.111 slat (nsec): min=9576, max=58784, avg=18955.04, stdev=7140.72 00:10:13.111 clat (usec): min=141, max=911, avg=197.59, stdev=44.21 00:10:13.111 lat (usec): min=152, max=929, avg=216.54, stdev=45.13 00:10:13.111 clat percentiles (usec): 00:10:13.111 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 174], 00:10:13.111 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 198], 00:10:13.111 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 233], 95.00th=[ 245], 00:10:13.111 | 99.00th=[ 379], 99.50th=[ 400], 99.90th=[ 766], 99.95th=[ 914], 00:10:13.111 | 99.99th=[ 914] 00:10:13.111 bw ( KiB/s): min= 8192, max= 8192, per=52.00%, avg=8192.00, stdev= 0.00, samples=1 00:10:13.111 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:13.111 lat (usec) : 250=81.39%, 500=18.20%, 750=0.10%, 1000=0.07% 00:10:13.111 lat (msec) : 50=0.24% 00:10:13.111 cpu : usr=3.20%, sys=7.00%, ctx=2929, majf=0, minf=1 00:10:13.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.111 issued rwts: total=1392,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.111 job3: (groupid=0, jobs=1): err= 0: pid=3665134: Wed Nov 20 09:43:49 2024 00:10:13.111 read: IOPS=1483, BW=5934KiB/s (6076kB/s)(5940KiB/1001msec) 00:10:13.111 slat (nsec): min=7112, max=64615, avg=12190.15, stdev=5781.88 00:10:13.111 clat (usec): min=184, max=41998, avg=431.04, stdev=2807.52 00:10:13.111 lat (usec): min=192, max=42018, avg=443.23, stdev=2808.73 00:10:13.111 clat percentiles (usec): 00:10:13.111 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:10:13.111 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 233], 60.00th=[ 241], 00:10:13.111 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 289], 00:10:13.111 | 
99.00th=[ 578], 99.50th=[ 611], 99.90th=[42206], 99.95th=[42206], 00:10:13.111 | 99.99th=[42206] 00:10:13.111 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:13.111 slat (nsec): min=9209, max=56325, avg=16427.08, stdev=7162.16 00:10:13.111 clat (usec): min=142, max=903, avg=198.18, stdev=43.99 00:10:13.111 lat (usec): min=151, max=929, avg=214.61, stdev=44.94 00:10:13.111 clat percentiles (usec): 00:10:13.111 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 178], 00:10:13.111 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 200], 00:10:13.111 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 231], 95.00th=[ 245], 00:10:13.111 | 99.00th=[ 351], 99.50th=[ 420], 99.90th=[ 848], 99.95th=[ 906], 00:10:13.111 | 99.99th=[ 906] 00:10:13.111 bw ( KiB/s): min= 9104, max= 9104, per=57.79%, avg=9104.00, stdev= 0.00, samples=1 00:10:13.111 iops : min= 2276, max= 2276, avg=2276.00, stdev= 0.00, samples=1 00:10:13.111 lat (usec) : 250=85.10%, 500=13.94%, 750=0.63%, 1000=0.10% 00:10:13.111 lat (msec) : 50=0.23% 00:10:13.111 cpu : usr=3.30%, sys=5.70%, ctx=3022, majf=0, minf=1 00:10:13.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.111 issued rwts: total=1485,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.111 00:10:13.111 Run status group 0 (all jobs): 00:10:13.111 READ: bw=11.4MiB/s (12.0MB/s), 83.3KiB/s-5934KiB/s (85.3kB/s-6076kB/s), io=11.9MiB (12.5MB), run=1001-1040msec 00:10:13.111 WRITE: bw=15.4MiB/s (16.1MB/s), 1969KiB/s-6138KiB/s (2016kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1040msec 00:10:13.111 00:10:13.111 Disk stats (read/write): 00:10:13.111 nvme0n1: ios=68/512, merge=0/0, ticks=923/110, in_queue=1033, util=94.39% 00:10:13.111 nvme0n2: ios=159/512, merge=0/0, ticks=1697/106, in_queue=1803, util=100.00% 00:10:13.111 nvme0n3: ios=1084/1536, merge=0/0, ticks=1309/288, in_queue=1597, util=98.23% 00:10:13.111 nvme0n4: ios=1075/1536, merge=0/0, ticks=708/279, in_queue=987, util=99.79% 00:10:13.111 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:13.111 [global] 00:10:13.111 thread=1 00:10:13.111 invalidate=1 00:10:13.111 rw=write 00:10:13.111 time_based=1 00:10:13.111 runtime=1 00:10:13.111 ioengine=libaio 00:10:13.111 direct=1 00:10:13.111 bs=4096 00:10:13.111 iodepth=128 00:10:13.111 norandommap=0 00:10:13.111 numjobs=1 00:10:13.111 00:10:13.111 verify_dump=1 00:10:13.111 verify_backlog=512 00:10:13.111 verify_state_save=0 00:10:13.111 do_verify=1 00:10:13.111 verify=crc32c-intel 00:10:13.111 [job0] 00:10:13.111 filename=/dev/nvme0n1 00:10:13.111 [job1] 00:10:13.111 filename=/dev/nvme0n2 00:10:13.111 [job2] 00:10:13.111 filename=/dev/nvme0n3 00:10:13.111 [job3] 00:10:13.111 filename=/dev/nvme0n4 00:10:13.111 Could not set queue depth (nvme0n1) 00:10:13.111 Could not set queue depth (nvme0n2) 00:10:13.111 Could not set queue depth (nvme0n3) 00:10:13.111 Could not set queue depth (nvme0n4) 00:10:13.370 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.370 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:10:13.370 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.370 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.370 fio-3.35 00:10:13.370 Starting 4 threads 00:10:14.744 00:10:14.744 job0: (groupid=0, jobs=1): err= 0: pid=3665369: Wed Nov 20 09:43:51 2024 00:10:14.744 read: IOPS=4158, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1002msec) 00:10:14.744 slat (usec): min=3, max=8567, avg=97.08, stdev=538.32 00:10:14.744 clat (usec): min=1097, max=28025, avg=12607.14, stdev=3188.04 00:10:14.744 lat (usec): min=1111, max=28046, avg=12704.22, stdev=3235.32 00:10:14.744 clat percentiles (usec): 00:10:14.744 | 1.00th=[ 6259], 5.00th=[ 8979], 10.00th=[10028], 20.00th=[10683], 00:10:14.744 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11469], 60.00th=[11994], 00:10:14.744 | 70.00th=[13304], 80.00th=[14091], 90.00th=[16909], 95.00th=[19268], 00:10:14.744 | 99.00th=[22152], 99.50th=[24773], 99.90th=[27919], 99.95th=[27919], 00:10:14.744 | 99.99th=[27919] 00:10:14.744 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:10:14.744 slat (usec): min=4, max=8340, avg=117.69, stdev=564.89 00:10:14.744 clat (usec): min=5895, max=72652, avg=16088.57, stdev=11642.61 00:10:14.744 lat (usec): min=5904, max=72681, avg=16206.26, stdev=11720.01 00:10:14.744 clat percentiles (usec): 00:10:14.744 | 1.00th=[ 6915], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[10683], 00:10:14.744 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:10:14.744 | 70.00th=[13435], 80.00th=[18482], 90.00th=[27919], 95.00th=[41157], 00:10:14.744 | 99.00th=[67634], 99.50th=[70779], 99.90th=[72877], 99.95th=[72877], 00:10:14.744 | 99.99th=[72877] 00:10:14.744 bw ( KiB/s): min=12920, max=23496, per=27.99%, avg=18208.00, stdev=7478.36, samples=2 00:10:14.744 iops : min= 3230, max= 5874, avg=4552.00, stdev=1869.59, samples=2 00:10:14.744 lat (msec) : 2=0.09%, 10=7.51%, 20=81.25%, 50=9.24%, 100=1.90% 00:10:14.744 cpu : usr=6.89%, sys=10.99%, ctx=441, majf=0, minf=1 00:10:14.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:14.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.745 issued rwts: total=4167,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.745 job1: (groupid=0, jobs=1): err= 0: pid=3665370: Wed Nov 20 09:43:51 2024 00:10:14.745 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:10:14.745 slat (usec): min=2, max=16870, avg=123.48, stdev=818.51 00:10:14.745 clat (usec): min=3690, max=40358, avg=15527.08, stdev=6323.18 00:10:14.745 lat (usec): min=3694, max=44141, avg=15650.55, stdev=6395.91 00:10:14.745 clat percentiles (usec): 00:10:14.745 | 1.00th=[ 8717], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:10:14.745 | 30.00th=[11338], 40.00th=[12256], 50.00th=[12649], 60.00th=[13566], 00:10:14.745 | 70.00th=[17171], 80.00th=[20317], 90.00th=[24511], 95.00th=[29230], 00:10:14.745 | 99.00th=[36439], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:10:14.745 | 99.99th=[40109] 00:10:14.745 write: IOPS=3660, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1005msec); 0 zone resets 00:10:14.745 slat (usec): min=3, max=9385, avg=144.20, stdev=795.06 00:10:14.745 clat (usec): min=749, max=62372, avg=19544.09, stdev=15629.48 00:10:14.745 lat (usec): min=811, 
max=62384, avg=19688.30, stdev=15732.76 00:10:14.745 clat percentiles (usec): 00:10:14.745 | 1.00th=[ 3589], 5.00th=[ 7439], 10.00th=[ 9765], 20.00th=[10552], 00:10:14.745 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11600], 60.00th=[12518], 00:10:14.745 | 70.00th=[15139], 80.00th=[34341], 90.00th=[50070], 95.00th=[54789], 00:10:14.745 | 99.00th=[61604], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:10:14.745 | 99.99th=[62129] 00:10:14.745 bw ( KiB/s): min= 8192, max=20480, per=22.03%, avg=14336.00, stdev=8688.93, samples=2 00:10:14.745 iops : min= 2048, max= 5120, avg=3584.00, stdev=2172.23, samples=2 00:10:14.745 lat (usec) : 750=0.01%, 1000=0.18% 00:10:14.745 lat (msec) : 2=0.04%, 4=0.62%, 10=7.83%, 20=66.52%, 50=19.69% 00:10:14.745 lat (msec) : 100=5.11% 00:10:14.745 cpu : usr=2.59%, sys=5.18%, ctx=330, majf=0, minf=1 00:10:14.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:14.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.745 issued rwts: total=3584,3679,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.745 job2: (groupid=0, jobs=1): err= 0: pid=3665371: Wed Nov 20 09:43:51 2024 00:10:14.745 read: IOPS=3700, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1003msec) 00:10:14.745 slat (usec): min=2, max=13724, avg=128.28, stdev=838.26 00:10:14.745 clat (usec): min=2702, max=37833, avg=16933.07, stdev=4018.40 00:10:14.745 lat (usec): min=2744, max=37838, avg=17061.36, stdev=4070.02 00:10:14.745 clat percentiles (usec): 00:10:14.745 | 1.00th=[ 8717], 5.00th=[10421], 10.00th=[11994], 20.00th=[14222], 00:10:14.745 | 30.00th=[15139], 40.00th=[16319], 50.00th=[16909], 60.00th=[17171], 00:10:14.745 | 70.00th=[17695], 80.00th=[19792], 90.00th=[21103], 95.00th=[23725], 00:10:14.745 | 99.00th=[32375], 99.50th=[35914], 99.90th=[35914], 99.95th=[38011], 00:10:14.745 | 99.99th=[38011] 00:10:14.745 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:10:14.745 slat (usec): min=3, max=14653, avg=120.81, stdev=859.33 00:10:14.745 clat (usec): min=3754, max=37763, avg=15701.44, stdev=4529.24 00:10:14.745 lat (usec): min=3759, max=37771, avg=15822.25, stdev=4603.91 00:10:14.745 clat percentiles (usec): 00:10:14.745 | 1.00th=[ 7439], 5.00th=[10945], 10.00th=[11600], 20.00th=[13042], 00:10:14.745 | 30.00th=[13566], 40.00th=[14222], 50.00th=[15008], 60.00th=[15270], 00:10:14.745 | 70.00th=[15926], 80.00th=[17695], 90.00th=[22152], 95.00th=[23200], 00:10:14.745 | 99.00th=[32637], 99.50th=[36963], 99.90th=[37487], 99.95th=[38011], 00:10:14.745 | 99.99th=[38011] 00:10:14.745 bw ( KiB/s): min=16048, max=16720, per=25.18%, avg=16384.00, stdev=475.18, samples=2 00:10:14.745 iops : min= 4012, max= 4180, avg=4096.00, stdev=118.79, samples=2 00:10:14.745 lat (msec) : 4=0.08%, 10=3.27%, 20=80.57%, 50=16.09% 00:10:14.745 cpu : usr=2.69%, sys=6.18%, ctx=263, majf=0, minf=1 00:10:14.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:14.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.745 issued rwts: total=3712,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.745 job3: (groupid=0, jobs=1): err= 0: pid=3665372: Wed Nov 20 09:43:51 2024 00:10:14.745 read: 
IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:10:14.745 slat (usec): min=2, max=13031, avg=109.04, stdev=727.09 00:10:14.745 clat (usec): min=2415, max=36947, avg=14168.52, stdev=4466.50 00:10:14.745 lat (usec): min=2424, max=36957, avg=14277.57, stdev=4506.56 00:10:14.745 clat percentiles (usec): 00:10:14.745 | 1.00th=[ 4621], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[11076], 00:10:14.745 | 30.00th=[11600], 40.00th=[12125], 50.00th=[13566], 60.00th=[14877], 00:10:14.745 | 70.00th=[16319], 80.00th=[16909], 90.00th=[18744], 95.00th=[21103], 00:10:14.745 | 99.00th=[34341], 99.50th=[35390], 99.90th=[36963], 99.95th=[36963], 00:10:14.745 | 99.99th=[36963] 00:10:14.745 write: IOPS=3956, BW=15.5MiB/s (16.2MB/s)(15.5MiB/1002msec); 0 zone resets 00:10:14.745 slat (usec): min=3, max=15390, avg=135.60, stdev=747.70 00:10:14.745 clat (usec): min=253, max=98736, avg=19286.49, stdev=15746.20 00:10:14.745 lat (usec): min=377, max=98750, avg=19422.09, stdev=15828.24 00:10:14.745 clat percentiles (usec): 00:10:14.745 | 1.00th=[ 2802], 5.00th=[ 5407], 10.00th=[ 7504], 20.00th=[10552], 00:10:14.745 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13566], 60.00th=[15664], 00:10:14.745 | 70.00th=[17695], 80.00th=[26084], 90.00th=[34866], 95.00th=[49021], 00:10:14.745 | 99.00th=[94897], 99.50th=[98042], 99.90th=[99091], 99.95th=[99091], 00:10:14.745 | 99.99th=[99091] 00:10:14.745 bw ( KiB/s): min=12240, max=18472, per=23.60%, avg=15356.00, stdev=4406.69, samples=2 00:10:14.745 iops : min= 3060, max= 4618, avg=3839.00, stdev=1101.67, samples=2 00:10:14.745 lat (usec) : 500=0.03%, 750=0.07% 00:10:14.745 lat (msec) : 2=0.32%, 4=0.78%, 10=13.51%, 20=67.50%, 50=15.30% 00:10:14.745 lat (msec) : 100=2.49% 00:10:14.745 cpu : usr=2.70%, sys=6.49%, ctx=455, majf=0, minf=2 00:10:14.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:14.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.745 issued rwts: total=3584,3964,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.745 00:10:14.745 Run status group 0 (all jobs): 00:10:14.745 READ: bw=58.5MiB/s (61.3MB/s), 13.9MiB/s-16.2MiB/s (14.6MB/s-17.0MB/s), io=58.8MiB (61.6MB), run=1002-1005msec 00:10:14.745 WRITE: bw=63.5MiB/s (66.6MB/s), 14.3MiB/s-18.0MiB/s (15.0MB/s-18.8MB/s), io=63.9MiB (67.0MB), run=1002-1005msec 00:10:14.745 00:10:14.745 Disk stats (read/write): 00:10:14.745 nvme0n1: ios=3495/3584, merge=0/0, ticks=22062/29421, in_queue=51483, util=97.90% 00:10:14.745 nvme0n2: ios=3092/3447, merge=0/0, ticks=22593/31140, in_queue=53733, util=86.79% 00:10:14.745 nvme0n3: ios=3190/3584, merge=0/0, ticks=25325/25758, in_queue=51083, util=98.33% 00:10:14.745 nvme0n4: ios=2874/3072, merge=0/0, ticks=29602/48960, in_queue=78562, util=89.26% 00:10:14.745 09:43:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:14.745 [global] 00:10:14.745 thread=1 00:10:14.745 invalidate=1 00:10:14.745 rw=randwrite 00:10:14.745 time_based=1 00:10:14.745 runtime=1 00:10:14.745 ioengine=libaio 00:10:14.745 direct=1 00:10:14.745 bs=4096 00:10:14.745 iodepth=128 00:10:14.745 norandommap=0 00:10:14.745 numjobs=1 00:10:14.745 00:10:14.745 verify_dump=1 00:10:14.745 verify_backlog=512 00:10:14.745 verify_state_save=0 00:10:14.745 do_verify=1 
00:10:14.745 verify=crc32c-intel 00:10:14.745 [job0] 00:10:14.745 filename=/dev/nvme0n1 00:10:14.745 [job1] 00:10:14.745 filename=/dev/nvme0n2 00:10:14.745 [job2] 00:10:14.745 filename=/dev/nvme0n3 00:10:14.745 [job3] 00:10:14.745 filename=/dev/nvme0n4 00:10:14.745 Could not set queue depth (nvme0n1) 00:10:14.745 Could not set queue depth (nvme0n2) 00:10:14.745 Could not set queue depth (nvme0n3) 00:10:14.745 Could not set queue depth (nvme0n4) 00:10:14.745 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.745 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.745 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.745 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.745 fio-3.35 00:10:14.745 Starting 4 threads 00:10:16.119 00:10:16.119 job0: (groupid=0, jobs=1): err= 0: pid=3665600: Wed Nov 20 09:43:52 2024 00:10:16.119 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:10:16.119 slat (usec): min=2, max=10274, avg=105.88, stdev=636.56 00:10:16.119 clat (usec): min=6060, max=42097, avg=12949.52, stdev=4257.64 00:10:16.119 lat (usec): min=6074, max=42105, avg=13055.40, stdev=4311.04 00:10:16.119 clat percentiles (usec): 00:10:16.119 | 1.00th=[ 7570], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[10814], 00:10:16.119 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11731], 60.00th=[11994], 00:10:16.119 | 70.00th=[12780], 80.00th=[14353], 90.00th=[17695], 95.00th=[21890], 00:10:16.119 | 99.00th=[30540], 99.50th=[37487], 99.90th=[42206], 99.95th=[42206], 00:10:16.119 | 99.99th=[42206] 00:10:16.119 write: IOPS=4537, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1004msec); 0 zone resets 00:10:16.119 slat (usec): min=4, max=8964, avg=112.30, stdev=519.29 00:10:16.119 clat (usec): min=520, max=42103, avg=16317.77, stdev=7012.67 00:10:16.119 lat (usec): min=1281, max=42119, avg=16430.07, stdev=7059.08 00:10:16.119 clat percentiles (usec): 00:10:16.119 | 1.00th=[ 5866], 5.00th=[ 8225], 10.00th=[10290], 20.00th=[11207], 00:10:16.119 | 30.00th=[11731], 40.00th=[12125], 50.00th=[13042], 60.00th=[15795], 00:10:16.119 | 70.00th=[20317], 80.00th=[21890], 90.00th=[26870], 95.00th=[30016], 00:10:16.119 | 99.00th=[35914], 99.50th=[37487], 99.90th=[39060], 99.95th=[39060], 00:10:16.119 | 99.99th=[42206] 00:10:16.119 bw ( KiB/s): min=16384, max=19040, per=31.23%, avg=17712.00, stdev=1878.08, samples=2 00:10:16.119 iops : min= 4096, max= 4760, avg=4428.00, stdev=469.52, samples=2 00:10:16.119 lat (usec) : 750=0.01% 00:10:16.119 lat (msec) : 2=0.02%, 10=8.65%, 20=71.87%, 50=19.45% 00:10:16.119 cpu : usr=7.28%, sys=10.37%, ctx=500, majf=0, minf=2 00:10:16.119 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:16.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.119 issued rwts: total=4096,4556,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.119 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.119 job1: (groupid=0, jobs=1): err= 0: pid=3665601: Wed Nov 20 09:43:52 2024 00:10:16.119 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:10:16.119 slat (usec): min=3, max=11408, avg=147.16, stdev=825.06 00:10:16.119 clat (usec): min=9896, max=33348, avg=18247.76, stdev=3988.91 00:10:16.119 
lat (usec): min=9926, max=33387, avg=18394.92, stdev=4054.76 00:10:16.119 clat percentiles (usec): 00:10:16.119 | 1.00th=[11469], 5.00th=[13304], 10.00th=[13829], 20.00th=[14484], 00:10:16.119 | 30.00th=[16188], 40.00th=[16712], 50.00th=[17433], 60.00th=[18220], 00:10:16.119 | 70.00th=[19792], 80.00th=[21890], 90.00th=[23725], 95.00th=[25560], 00:10:16.119 | 99.00th=[31065], 99.50th=[32375], 99.90th=[32900], 99.95th=[33162], 00:10:16.119 | 99.99th=[33424] 00:10:16.119 write: IOPS=3286, BW=12.8MiB/s (13.5MB/s)(12.9MiB/1005msec); 0 zone resets 00:10:16.119 slat (usec): min=4, max=12628, avg=152.92, stdev=696.60 00:10:16.119 clat (usec): min=4827, max=38452, avg=21505.15, stdev=5030.87 00:10:16.119 lat (usec): min=5460, max=38469, avg=21658.07, stdev=5072.72 00:10:16.119 clat percentiles (usec): 00:10:16.119 | 1.00th=[10683], 5.00th=[13566], 10.00th=[16057], 20.00th=[16581], 00:10:16.119 | 30.00th=[19530], 40.00th=[20841], 50.00th=[21890], 60.00th=[22676], 00:10:16.119 | 70.00th=[23200], 80.00th=[24249], 90.00th=[27919], 95.00th=[30278], 00:10:16.119 | 99.00th=[34866], 99.50th=[36963], 99.90th=[38536], 99.95th=[38536], 00:10:16.119 | 99.99th=[38536] 00:10:16.119 bw ( KiB/s): min=12472, max=12936, per=22.40%, avg=12704.00, stdev=328.10, samples=2 00:10:16.119 iops : min= 3118, max= 3234, avg=3176.00, stdev=82.02, samples=2 00:10:16.119 lat (msec) : 10=0.52%, 20=51.89%, 50=47.59% 00:10:16.119 cpu : usr=4.88%, sys=9.16%, ctx=381, majf=0, minf=1 00:10:16.119 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:16.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.119 issued rwts: total=3072,3303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.119 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.119 job2: (groupid=0, jobs=1): err= 0: pid=3665605: Wed Nov 20 09:43:52 2024 00:10:16.119 read: IOPS=2416, BW=9664KiB/s (9896kB/s)(9732KiB/1007msec) 00:10:16.119 slat (usec): min=3, max=15064, avg=207.85, stdev=1172.36 00:10:16.119 clat (usec): min=5455, max=43078, avg=25207.26, stdev=6671.43 00:10:16.119 lat (usec): min=11104, max=45739, avg=25415.12, stdev=6644.65 00:10:16.119 clat percentiles (usec): 00:10:16.119 | 1.00th=[11338], 5.00th=[17171], 10.00th=[18744], 20.00th=[20841], 00:10:16.119 | 30.00th=[21627], 40.00th=[22414], 50.00th=[22938], 60.00th=[23462], 00:10:16.119 | 70.00th=[26346], 80.00th=[30278], 90.00th=[37487], 95.00th=[39060], 00:10:16.119 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:10:16.119 | 99.99th=[43254] 00:10:16.119 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:10:16.119 slat (usec): min=4, max=10825, avg=182.68, stdev=1017.91 00:10:16.119 clat (usec): min=11395, max=45830, avg=25694.47, stdev=7494.57 00:10:16.119 lat (usec): min=12167, max=45844, avg=25877.15, stdev=7460.73 00:10:16.119 clat percentiles (usec): 00:10:16.119 | 1.00th=[12256], 5.00th=[15008], 10.00th=[15926], 20.00th=[17957], 00:10:16.119 | 30.00th=[21890], 40.00th=[23200], 50.00th=[25035], 60.00th=[27919], 00:10:16.119 | 70.00th=[29492], 80.00th=[32637], 90.00th=[33424], 95.00th=[40633], 00:10:16.119 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:10:16.119 | 99.99th=[45876] 00:10:16.119 bw ( KiB/s): min= 8192, max=12288, per=18.06%, avg=10240.00, stdev=2896.31, samples=2 00:10:16.119 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:10:16.119 lat 
(msec) : 10=0.02%, 20=18.85%, 50=81.13% 00:10:16.119 cpu : usr=3.78%, sys=5.27%, ctx=163, majf=0, minf=1 00:10:16.119 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:10:16.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.119 issued rwts: total=2433,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.119 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.119 job3: (groupid=0, jobs=1): err= 0: pid=3665606: Wed Nov 20 09:43:52 2024 00:10:16.119 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:10:16.119 slat (usec): min=2, max=19893, avg=123.66, stdev=843.44 00:10:16.119 clat (usec): min=6652, max=54086, avg=16192.96, stdev=6391.00 00:10:16.119 lat (usec): min=6662, max=54102, avg=16316.62, stdev=6450.41 00:10:16.119 clat percentiles (usec): 00:10:16.119 | 1.00th=[ 6718], 5.00th=[10421], 10.00th=[11600], 20.00th=[12256], 00:10:16.119 | 30.00th=[13304], 40.00th=[13829], 50.00th=[14746], 60.00th=[15926], 00:10:16.119 | 70.00th=[16581], 80.00th=[17695], 90.00th=[23462], 95.00th=[27919], 00:10:16.119 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[53740], 00:10:16.119 | 99.99th=[54264] 00:10:16.119 write: IOPS=3846, BW=15.0MiB/s (15.8MB/s)(15.1MiB/1003msec); 0 zone resets 00:10:16.119 slat (usec): min=3, max=9922, avg=135.39, stdev=725.71 00:10:16.119 clat (usec): min=448, max=54596, avg=17732.52, stdev=10136.87 00:10:16.119 lat (usec): min=4474, max=54613, avg=17867.91, stdev=10194.89 00:10:16.119 clat percentiles (usec): 00:10:16.119 | 1.00th=[ 4621], 5.00th=[ 8979], 10.00th=[11469], 20.00th=[11994], 00:10:16.119 | 30.00th=[12256], 40.00th=[13435], 50.00th=[13698], 60.00th=[15139], 00:10:16.119 | 70.00th=[19268], 80.00th=[21103], 90.00th=[27657], 95.00th=[44827], 00:10:16.119 | 99.00th=[54264], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:10:16.119 | 99.99th=[54789] 00:10:16.119 bw ( KiB/s): min=12688, max=17152, per=26.31%, avg=14920.00, stdev=3156.52, samples=2 00:10:16.119 iops : min= 3172, max= 4288, avg=3730.00, stdev=789.13, samples=2 00:10:16.119 lat (usec) : 500=0.01% 00:10:16.119 lat (msec) : 10=5.55%, 20=73.88%, 50=18.34%, 100=2.22% 00:10:16.119 cpu : usr=4.09%, sys=7.19%, ctx=311, majf=0, minf=1 00:10:16.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:16.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.120 issued rwts: total=3584,3858,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.120 00:10:16.120 Run status group 0 (all jobs): 00:10:16.120 READ: bw=51.1MiB/s (53.6MB/s), 9664KiB/s-15.9MiB/s (9896kB/s-16.7MB/s), io=51.5MiB (54.0MB), run=1003-1007msec 00:10:16.120 WRITE: bw=55.4MiB/s (58.1MB/s), 9.93MiB/s-17.7MiB/s (10.4MB/s-18.6MB/s), io=55.8MiB (58.5MB), run=1003-1007msec 00:10:16.120 00:10:16.120 Disk stats (read/write): 00:10:16.120 nvme0n1: ios=3558/3584, merge=0/0, ticks=34808/53856, in_queue=88664, util=86.77% 00:10:16.120 nvme0n2: ios=2599/2682, merge=0/0, ticks=23602/27761, in_queue=51363, util=98.98% 00:10:16.120 nvme0n3: ios=2103/2368, merge=0/0, ticks=13118/13354, in_queue=26472, util=99.48% 00:10:16.120 nvme0n4: ios=3050/3072, merge=0/0, ticks=20820/21734, in_queue=42554, util=98.84% 00:10:16.120 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@55 -- # sync 00:10:16.120 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3665742 00:10:16.120 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:16.120 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:16.120 [global] 00:10:16.120 thread=1 00:10:16.120 invalidate=1 00:10:16.120 rw=read 00:10:16.120 time_based=1 00:10:16.120 runtime=10 00:10:16.120 ioengine=libaio 00:10:16.120 direct=1 00:10:16.120 bs=4096 00:10:16.120 iodepth=1 00:10:16.120 norandommap=1 00:10:16.120 numjobs=1 00:10:16.120 00:10:16.120 [job0] 00:10:16.120 filename=/dev/nvme0n1 00:10:16.120 [job1] 00:10:16.120 filename=/dev/nvme0n2 00:10:16.120 [job2] 00:10:16.120 filename=/dev/nvme0n3 00:10:16.120 [job3] 00:10:16.120 filename=/dev/nvme0n4 00:10:16.120 Could not set queue depth (nvme0n1) 00:10:16.120 Could not set queue depth (nvme0n2) 00:10:16.120 Could not set queue depth (nvme0n3) 00:10:16.120 Could not set queue depth (nvme0n4) 00:10:16.120 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.120 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.120 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.120 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.120 fio-3.35 00:10:16.120 Starting 4 threads 00:10:19.398 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:19.398 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=39444480, buflen=4096 00:10:19.398 fio: pid=3665888, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:19.398 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:19.655 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.655 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:19.655 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1445888, buflen=4096 00:10:19.655 fio: pid=3665876, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:19.913 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.913 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:19.914 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=42192896, buflen=4096 00:10:19.914 fio: pid=3665841, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:20.172 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=58826752, buflen=4096 00:10:20.172 fio: pid=3665849, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 
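The "Operation not supported" io_u errors above are the expected outcome of this step: target/fio.sh starts a 10-second read job (4 KiB blocks, iodepth=1) in the background and then deletes the raid and malloc bdevs backing the subsystem while that job is still reading. A minimal bash sketch of the sequence, reconstructed from the commands visible in this trace; the backgrounding, variable handling, and loop expansion are assumptions for illustration, not the literal fio.sh source:

    # Sketch only -- mirrors the trace above, not the actual target/fio.sh code.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Kick off a 10-second read workload against the connected namespaces
    # and remember its pid so the (expected) failure can be collected later.
    "$spdk/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!

    sleep 3

    # Remove the backing bdevs over RPC while fio is still running; each delete
    # makes further reads on the affected namespace fail with
    # "Operation not supported", as seen in the errors above.
    "$spdk/scripts/rpc.py" bdev_raid_delete concat0
    "$spdk/scripts/rpc.py" bdev_raid_delete raid0
    # $malloc_bdevs, $raid_malloc_bdevs and $concat_malloc_bdevs are set earlier
    # in fio.sh (not shown here); in this run they expand to the Malloc* bdevs
    # created during setup.
    for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
        "$spdk/scripts/rpc.py" bdev_malloc_delete "$malloc_bdev"
    done

    # A non-zero fio exit status is the pass condition here; it is what produces
    # the "nvmf hotplug test: fio failed as expected" message further down.
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'
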
00:10:20.172 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.172 09:43:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:20.172 00:10:20.172 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3665841: Wed Nov 20 09:43:56 2024 00:10:20.172 read: IOPS=2897, BW=11.3MiB/s (11.9MB/s)(40.2MiB/3555msec) 00:10:20.173 slat (usec): min=4, max=23939, avg=15.09, stdev=242.96 00:10:20.173 clat (usec): min=174, max=42150, avg=324.81, stdev=1782.68 00:10:20.173 lat (usec): min=180, max=64987, avg=339.90, stdev=1864.20 00:10:20.173 clat percentiles (usec): 00:10:20.173 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:10:20.173 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 229], 00:10:20.173 | 70.00th=[ 258], 80.00th=[ 306], 90.00th=[ 351], 95.00th=[ 375], 00:10:20.173 | 99.00th=[ 412], 99.50th=[ 437], 99.90th=[42206], 99.95th=[42206], 00:10:20.173 | 99.99th=[42206] 00:10:20.173 bw ( KiB/s): min= 7120, max=17880, per=37.93%, avg=13718.67, stdev=4144.96, samples=6 00:10:20.173 iops : min= 1780, max= 4470, avg=3429.67, stdev=1036.24, samples=6 00:10:20.173 lat (usec) : 250=68.09%, 500=31.68%, 750=0.02% 00:10:20.173 lat (msec) : 4=0.01%, 50=0.18% 00:10:20.173 cpu : usr=1.63%, sys=4.11%, ctx=10304, majf=0, minf=1 00:10:20.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.173 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.173 issued rwts: total=10302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.173 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.173 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3665849: Wed Nov 20 09:43:56 2024 00:10:20.173 read: IOPS=3748, BW=14.6MiB/s (15.4MB/s)(56.1MiB/3832msec) 00:10:20.173 slat (usec): min=3, max=29369, avg=17.21, stdev=352.86 00:10:20.173 clat (usec): min=159, max=41271, avg=244.77, stdev=348.92 00:10:20.173 lat (usec): min=164, max=41281, avg=261.98, stdev=498.10 00:10:20.173 clat percentiles (usec): 00:10:20.173 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 194], 00:10:20.173 | 30.00th=[ 208], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 239], 00:10:20.173 | 70.00th=[ 251], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 351], 00:10:20.173 | 99.00th=[ 510], 99.50th=[ 537], 99.90th=[ 758], 99.95th=[ 938], 00:10:20.173 | 99.99th=[ 1237] 00:10:20.173 bw ( KiB/s): min=12000, max=18360, per=41.38%, avg=14966.71, stdev=1986.66, samples=7 00:10:20.173 iops : min= 3000, max= 4590, avg=3741.57, stdev=496.58, samples=7 00:10:20.173 lat (usec) : 250=69.08%, 500=29.69%, 750=1.12%, 1000=0.08% 00:10:20.173 lat (msec) : 2=0.01%, 50=0.01% 00:10:20.173 cpu : usr=1.75%, sys=4.88%, ctx=14369, majf=0, minf=2 00:10:20.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.173 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.173 issued rwts: total=14363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.173 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.173 job2: (groupid=0, jobs=1): 
err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3665876: Wed Nov 20 09:43:56 2024 00:10:20.173 read: IOPS=108, BW=431KiB/s (441kB/s)(1412KiB/3277msec) 00:10:20.173 slat (usec): min=8, max=17917, avg=69.46, stdev=951.36 00:10:20.173 clat (usec): min=208, max=43958, avg=9127.52, stdev=16914.63 00:10:20.173 lat (usec): min=216, max=59995, avg=9197.13, stdev=17039.40 00:10:20.173 clat percentiles (usec): 00:10:20.173 | 1.00th=[ 233], 5.00th=[ 260], 10.00th=[ 277], 20.00th=[ 293], 00:10:20.173 | 30.00th=[ 310], 40.00th=[ 359], 50.00th=[ 379], 60.00th=[ 396], 00:10:20.173 | 70.00th=[ 420], 80.00th=[40633], 90.00th=[41681], 95.00th=[42206], 00:10:20.173 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:10:20.173 | 99.99th=[43779] 00:10:20.173 bw ( KiB/s): min= 192, max= 864, per=1.28%, avg=462.67, stdev=281.91, samples=6 00:10:20.173 iops : min= 48, max= 216, avg=115.67, stdev=70.48, samples=6 00:10:20.173 lat (usec) : 250=2.82%, 500=72.32%, 750=2.82%, 1000=0.56% 00:10:20.173 lat (msec) : 50=21.19% 00:10:20.173 cpu : usr=0.06%, sys=0.34%, ctx=356, majf=0, minf=2 00:10:20.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.173 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.173 issued rwts: total=354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.173 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.173 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3665888: Wed Nov 20 09:43:56 2024 00:10:20.173 read: IOPS=3274, BW=12.8MiB/s (13.4MB/s)(37.6MiB/2941msec) 00:10:20.173 slat (nsec): min=4552, max=78058, avg=15726.02, stdev=9347.72 00:10:20.173 clat (usec): min=196, max=1158, avg=284.93, stdev=53.73 00:10:20.173 lat (usec): min=201, max=1189, avg=300.66, stdev=58.15 00:10:20.173 clat percentiles (usec): 00:10:20.173 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 245], 00:10:20.173 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:10:20.173 | 70.00th=[ 297], 80.00th=[ 322], 90.00th=[ 355], 95.00th=[ 383], 00:10:20.173 | 99.00th=[ 482], 99.50th=[ 510], 99.90th=[ 553], 99.95th=[ 734], 00:10:20.173 | 99.99th=[ 1156] 00:10:20.173 bw ( KiB/s): min=10776, max=14424, per=35.28%, avg=12758.40, stdev=1592.07, samples=5 00:10:20.173 iops : min= 2694, max= 3606, avg=3189.60, stdev=398.02, samples=5 00:10:20.173 lat (usec) : 250=25.50%, 500=73.77%, 750=0.67%, 1000=0.03% 00:10:20.173 lat (msec) : 2=0.01% 00:10:20.173 cpu : usr=3.23%, sys=6.63%, ctx=9633, majf=0, minf=1 00:10:20.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.173 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.173 issued rwts: total=9631,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.173 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.173 00:10:20.173 Run status group 0 (all jobs): 00:10:20.173 READ: bw=35.3MiB/s (37.0MB/s), 431KiB/s-14.6MiB/s (441kB/s-15.4MB/s), io=135MiB (142MB), run=2941-3832msec 00:10:20.173 00:10:20.173 Disk stats (read/write): 00:10:20.173 nvme0n1: ios=10296/0, merge=0/0, ticks=3076/0, in_queue=3076, util=95.25% 00:10:20.173 nvme0n2: ios=13494/0, merge=0/0, ticks=3223/0, in_queue=3223, util=94.72% 00:10:20.173 nvme0n3: ios=389/0, merge=0/0, 
ticks=3951/0, in_queue=3951, util=99.03% 00:10:20.173 nvme0n4: ios=9342/0, merge=0/0, ticks=2968/0, in_queue=2968, util=99.49% 00:10:20.431 09:43:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.431 09:43:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:20.689 09:43:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.689 09:43:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:20.946 09:43:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.946 09:43:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:21.204 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:21.204 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:21.460 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:21.460 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3665742 00:10:21.461 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:21.461 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:21.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.718 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:21.718 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:21.718 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:21.718 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.718 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:21.718 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.718 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:21.718 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:21.718 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:21.718 nvmf hotplug test: fio failed as expected 00:10:21.718 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:21.976 09:43:58 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:21.976 rmmod nvme_tcp 00:10:21.976 rmmod nvme_fabrics 00:10:21.976 rmmod nvme_keyring 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3663701 ']' 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3663701 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3663701 ']' 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3663701 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.976 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3663701 00:10:22.234 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.234 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.234 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3663701' 00:10:22.234 killing process with pid 3663701 00:10:22.234 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3663701 00:10:22.234 09:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3663701 00:10:22.234 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:22.234 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:22.234 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:22.234 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:22.234 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:22.234 09:43:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:22.234 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:22.234 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:22.234 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:22.234 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.234 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.234 09:43:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:24.771 00:10:24.771 real 0m24.346s 00:10:24.771 user 1m25.059s 00:10:24.771 sys 0m7.552s 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.771 ************************************ 00:10:24.771 END TEST nvmf_fio_target 00:10:24.771 ************************************ 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:24.771 ************************************ 00:10:24.771 START TEST nvmf_bdevio 00:10:24.771 ************************************ 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:24.771 * Looking for test storage... 
00:10:24.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:24.771 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:24.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.772 --rc genhtml_branch_coverage=1 00:10:24.772 --rc genhtml_function_coverage=1 00:10:24.772 --rc genhtml_legend=1 00:10:24.772 --rc geninfo_all_blocks=1 00:10:24.772 --rc geninfo_unexecuted_blocks=1 00:10:24.772 00:10:24.772 ' 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:24.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.772 --rc genhtml_branch_coverage=1 00:10:24.772 --rc genhtml_function_coverage=1 00:10:24.772 --rc genhtml_legend=1 00:10:24.772 --rc geninfo_all_blocks=1 00:10:24.772 --rc geninfo_unexecuted_blocks=1 00:10:24.772 00:10:24.772 ' 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:24.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.772 --rc genhtml_branch_coverage=1 00:10:24.772 --rc genhtml_function_coverage=1 00:10:24.772 --rc genhtml_legend=1 00:10:24.772 --rc geninfo_all_blocks=1 00:10:24.772 --rc geninfo_unexecuted_blocks=1 00:10:24.772 00:10:24.772 ' 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:24.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.772 --rc genhtml_branch_coverage=1 00:10:24.772 --rc genhtml_function_coverage=1 00:10:24.772 --rc genhtml_legend=1 00:10:24.772 --rc geninfo_all_blocks=1 00:10:24.772 --rc geninfo_unexecuted_blocks=1 00:10:24.772 00:10:24.772 ' 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:24.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.772 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.773 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:24.773 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:24.773 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:24.773 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:26.675 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:26.675 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:26.675 09:44:03 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.675 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:26.676 Found net devices under 0000:09:00.0: cvl_0_0 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:26.676 Found net devices under 0000:09:00.1: cvl_0_1 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:26.676 
09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:26.676 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:26.934 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:26.934 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:26.934 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:26.934 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:26.934 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:26.934 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:26.934 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:26.934 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:26.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:26.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:10:26.934 00:10:26.934 --- 10.0.0.2 ping statistics --- 00:10:26.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.934 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:10:26.934 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:26.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:26.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:10:26.934 00:10:26.934 --- 10.0.0.1 ping statistics --- 00:10:26.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.934 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:10:26.934 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.934 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:26.934 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:26.934 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.934 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:26.934 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:26.934 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.934 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:26.935 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:26.935 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:26.935 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:26.935 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:26.935 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.935 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3668589 00:10:26.935 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:26.935 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3668589 00:10:26.935 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3668589 ']' 00:10:26.935 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.935 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.935 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.935 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.935 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.935 [2024-11-20 09:44:03.740648] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:10:26.935 [2024-11-20 09:44:03.740744] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.935 [2024-11-20 09:44:03.810994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.194 [2024-11-20 09:44:03.868095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.194 [2024-11-20 09:44:03.868144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.194 [2024-11-20 09:44:03.868171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.194 [2024-11-20 09:44:03.868182] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.194 [2024-11-20 09:44:03.868192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:27.194 [2024-11-20 09:44:03.869898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:27.194 [2024-11-20 09:44:03.869964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:27.194 [2024-11-20 09:44:03.870027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:27.194 [2024-11-20 09:44:03.870030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.194 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.194 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:27.194 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:27.194 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:27.194 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:27.194 [2024-11-20 09:44:04.014445] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:27.194 Malloc0 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.194 09:44:04 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:27.194 [2024-11-20 09:44:04.081402] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:27.194 { 00:10:27.194 "params": { 00:10:27.194 "name": "Nvme$subsystem", 00:10:27.194 "trtype": "$TEST_TRANSPORT", 00:10:27.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:27.194 "adrfam": "ipv4", 00:10:27.194 "trsvcid": "$NVMF_PORT", 00:10:27.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:27.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:27.194 "hdgst": ${hdgst:-false}, 00:10:27.194 "ddgst": ${ddgst:-false} 00:10:27.194 }, 00:10:27.194 "method": "bdev_nvme_attach_controller" 00:10:27.194 } 00:10:27.194 EOF 00:10:27.194 )") 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:27.194 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:27.194 "params": { 00:10:27.194 "name": "Nvme1", 00:10:27.194 "trtype": "tcp", 00:10:27.194 "traddr": "10.0.0.2", 00:10:27.194 "adrfam": "ipv4", 00:10:27.194 "trsvcid": "4420", 00:10:27.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:27.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:27.194 "hdgst": false, 00:10:27.195 "ddgst": false 00:10:27.195 }, 00:10:27.195 "method": "bdev_nvme_attach_controller" 00:10:27.195 }' 00:10:27.452 [2024-11-20 09:44:04.129093] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:10:27.452 [2024-11-20 09:44:04.129172] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3668624 ] 00:10:27.452 [2024-11-20 09:44:04.200211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:27.452 [2024-11-20 09:44:04.265400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.452 [2024-11-20 09:44:04.265446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.452 [2024-11-20 09:44:04.265450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.710 I/O targets: 00:10:27.710 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:27.710 00:10:27.710 00:10:27.710 CUnit - A unit testing framework for C - Version 2.1-3 00:10:27.710 http://cunit.sourceforge.net/ 00:10:27.710 00:10:27.710 00:10:27.710 Suite: bdevio tests on: Nvme1n1 00:10:27.710 Test: blockdev write read block ...passed 00:10:27.710 Test: blockdev write zeroes read block ...passed 00:10:27.710 Test: blockdev write zeroes read no split ...passed 00:10:27.968 Test: blockdev write zeroes read split ...passed 00:10:27.968 Test: blockdev write zeroes read split partial ...passed 00:10:27.969 Test: blockdev reset ...[2024-11-20 09:44:04.650145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:27.969 [2024-11-20 09:44:04.650252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdad640 (9): Bad file descriptor 00:10:27.969 [2024-11-20 09:44:04.703291] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:27.969 passed 00:10:27.969 Test: blockdev write read 8 blocks ...passed 00:10:27.969 Test: blockdev write read size > 128k ...passed 00:10:27.969 Test: blockdev write read invalid size ...passed 00:10:27.969 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:27.969 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:27.969 Test: blockdev write read max offset ...passed 00:10:27.969 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:27.969 Test: blockdev writev readv 8 blocks ...passed 00:10:27.969 Test: blockdev writev readv 30 x 1block ...passed 00:10:27.969 Test: blockdev writev readv block ...passed 00:10:27.969 Test: blockdev writev readv size > 128k ...passed 00:10:27.969 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:27.969 Test: blockdev comparev and writev ...[2024-11-20 09:44:04.877697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.969 [2024-11-20 09:44:04.877736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:27.969 [2024-11-20 09:44:04.877760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.969 [2024-11-20 09:44:04.877777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:27.969 [2024-11-20 09:44:04.878074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.969 [2024-11-20 09:44:04.878098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:27.969 [2024-11-20 09:44:04.878121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.969 [2024-11-20 09:44:04.878137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:27.969 [2024-11-20 09:44:04.878441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.969 [2024-11-20 09:44:04.878466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:27.969 [2024-11-20 09:44:04.878488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.969 [2024-11-20 09:44:04.878504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:27.969 [2024-11-20 09:44:04.878822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.969 [2024-11-20 09:44:04.878846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:27.969 [2024-11-20 09:44:04.878868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.969 [2024-11-20 09:44:04.878884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:28.227 passed 00:10:28.227 Test: blockdev nvme passthru rw ...passed 00:10:28.227 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:44:04.961539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:28.227 [2024-11-20 09:44:04.961568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:28.227 [2024-11-20 09:44:04.961715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:28.227 [2024-11-20 09:44:04.961738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:28.227 [2024-11-20 09:44:04.961891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:28.227 [2024-11-20 09:44:04.961915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:28.227 [2024-11-20 09:44:04.962060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:28.227 [2024-11-20 09:44:04.962083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:28.227 passed 00:10:28.227 Test: blockdev nvme admin passthru ...passed 00:10:28.227 Test: blockdev copy ...passed 00:10:28.227 00:10:28.227 Run Summary: Type Total Ran Passed Failed Inactive 00:10:28.227 suites 1 1 n/a 0 0 00:10:28.227 tests 23 23 23 0 0 00:10:28.227 asserts 152 152 152 0 n/a 00:10:28.227 00:10:28.227 Elapsed time = 1.060 seconds 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:28.485 rmmod nvme_tcp 00:10:28.485 rmmod nvme_fabrics 00:10:28.485 rmmod nvme_keyring 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
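The bdevio run traced above boils down to a short, reproducible sequence: start nvmf_tgt inside the cvl_0_0_ns_spdk namespace, configure it over the RPC socket, then point bdevio at the TCP listener on 10.0.0.2:4420. The sketch below is reconstructed from the xtrace output and is not the verbatim harness code: scripts/rpc.py stands in for the rpc_cmd wrapper, the backgrounding and readiness wait are simplified, the temp-file path is purely illustrative, and the outer "subsystems"/"bdev" wrapper around the attach-controller entry is assumed rather than shown verbatim in this excerpt.

# Target side (paths relative to the spdk checkout), mirroring nvmfappstart and bdevio.sh 18-22
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
# the harness waits for the RPC socket (waitforlisten) before issuing any RPCs
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevio consumes a bdev-subsystem JSON config equivalent to the params
# printed above (the harness streams it via process substitution, /dev/fd/62; a temp file
# is used here only for illustration)
cat > /tmp/bdevio_nvme.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [{
  "method": "bdev_nvme_attach_controller",
  "params": {"name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
             "adrfam": "ipv4", "trsvcid": "4420",
             "subnqn": "nqn.2016-06.io.spdk:cnode1",
             "hostnqn": "nqn.2016-06.io.spdk:host1",
             "hdgst": false, "ddgst": false}}]}]}
EOF
./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json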
00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3668589 ']' 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3668589 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3668589 ']' 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3668589 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3668589 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3668589' 00:10:28.485 killing process with pid 3668589 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3668589 00:10:28.485 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3668589 00:10:28.745 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:28.745 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:28.745 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:28.745 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:28.745 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:28.745 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:28.745 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:28.745 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:28.745 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:28.745 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.745 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.745 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:31.283 00:10:31.283 real 0m6.411s 00:10:31.283 user 0m9.699s 00:10:31.283 sys 0m2.212s 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.283 ************************************ 00:10:31.283 END TEST nvmf_bdevio 00:10:31.283 ************************************ 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:31.283 00:10:31.283 real 3m57.463s 00:10:31.283 user 10m20.478s 00:10:31.283 sys 1m8.432s 
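The teardown that just ran (nvmftestfini) is likewise compact. A hedged sketch of the equivalent manual cleanup, using the pid and interface names from this particular run; the _remove_spdk_ns helper is paraphrased here as a plain ip netns delete:

# Stop the target and unload the host-side NVMe fabrics modules
kill "$nvmfpid" && wait "$nvmfpid"      # killprocess, pid 3668589 in this run
modprobe -v -r nvme-tcp                  # also drops nvme_fabrics / nvme_keyring, as the rmmod output above shows
# Drop the SPDK-tagged iptables rule and dismantle the namespace topology
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk          # returns cvl_0_0 to the root namespace (_remove_spdk_ns)
ip -4 addr flush cvl_0_1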
00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:31.283 ************************************ 00:10:31.283 END TEST nvmf_target_core 00:10:31.283 ************************************ 00:10:31.283 09:44:07 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:31.283 09:44:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:31.283 09:44:07 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.283 09:44:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:31.283 ************************************ 00:10:31.283 START TEST nvmf_target_extra 00:10:31.283 ************************************ 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:31.283 * Looking for test storage... 00:10:31.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:31.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.283 --rc genhtml_branch_coverage=1 00:10:31.283 --rc genhtml_function_coverage=1 00:10:31.283 --rc genhtml_legend=1 00:10:31.283 --rc geninfo_all_blocks=1 00:10:31.283 --rc geninfo_unexecuted_blocks=1 00:10:31.283 00:10:31.283 ' 00:10:31.283 09:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:31.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.283 --rc genhtml_branch_coverage=1 00:10:31.283 --rc genhtml_function_coverage=1 00:10:31.283 --rc genhtml_legend=1 00:10:31.284 --rc geninfo_all_blocks=1 00:10:31.284 --rc geninfo_unexecuted_blocks=1 00:10:31.284 00:10:31.284 ' 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:31.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.284 --rc genhtml_branch_coverage=1 00:10:31.284 --rc genhtml_function_coverage=1 00:10:31.284 --rc genhtml_legend=1 00:10:31.284 --rc geninfo_all_blocks=1 00:10:31.284 --rc geninfo_unexecuted_blocks=1 00:10:31.284 00:10:31.284 ' 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:31.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.284 --rc genhtml_branch_coverage=1 00:10:31.284 --rc genhtml_function_coverage=1 00:10:31.284 --rc genhtml_legend=1 00:10:31.284 --rc geninfo_all_blocks=1 00:10:31.284 --rc geninfo_unexecuted_blocks=1 00:10:31.284 00:10:31.284 ' 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
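The lt/cmp_versions walk-through above (from scripts/common.sh, used to gate the lcov coverage flags) is a component-wise numeric version compare. A condensed sketch of what the trace is doing; the function names match the trace, but the bodies are a paraphrase rather than the verbatim source:

lt() { cmp_versions "$1" '<' "$2"; }     # true when $1 is an older version than $2

cmp_versions() {
    local ver1 ver1_l ver2 ver2_l op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"; ver1_l=${#ver1[@]}
    IFS=.-: read -ra ver2 <<< "$3"; ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        # missing components count as 0; the real helper also strips non-numeric parts (decimal)
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *=* ]]                     # equal versions satisfy ==, <=, >=
}

So lt 1.15 2 succeeds on the first differing component (1 < 2), which is why the trace returns 0 and then exports the --rc lcov_branch_coverage / lcov_function_coverage options seen above.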
00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:31.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:31.284 ************************************ 00:10:31.284 START TEST nvmf_example 00:10:31.284 ************************************ 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:31.284 * Looking for test storage... 
00:10:31.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.284 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:31.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.285 --rc genhtml_branch_coverage=1 00:10:31.285 --rc genhtml_function_coverage=1 00:10:31.285 --rc genhtml_legend=1 00:10:31.285 --rc geninfo_all_blocks=1 00:10:31.285 --rc geninfo_unexecuted_blocks=1 00:10:31.285 00:10:31.285 ' 00:10:31.285 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:31.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.285 --rc genhtml_branch_coverage=1 00:10:31.285 --rc genhtml_function_coverage=1 00:10:31.285 --rc genhtml_legend=1 00:10:31.285 --rc geninfo_all_blocks=1 00:10:31.285 --rc geninfo_unexecuted_blocks=1 00:10:31.285 00:10:31.285 ' 00:10:31.285 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:31.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.285 --rc genhtml_branch_coverage=1 00:10:31.285 --rc genhtml_function_coverage=1 00:10:31.285 --rc genhtml_legend=1 00:10:31.285 --rc geninfo_all_blocks=1 00:10:31.285 --rc geninfo_unexecuted_blocks=1 00:10:31.285 00:10:31.285 ' 00:10:31.285 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:31.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.285 --rc genhtml_branch_coverage=1 00:10:31.285 --rc genhtml_function_coverage=1 00:10:31.285 --rc genhtml_legend=1 00:10:31.285 --rc geninfo_all_blocks=1 00:10:31.285 --rc geninfo_unexecuted_blocks=1 00:10:31.285 00:10:31.285 ' 00:10:31.285 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.285 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:31.285 09:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:31.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:31.285 09:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:31.285 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:33.855 09:44:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:33.855 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:33.855 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:33.856 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:33.856 Found net devices under 0000:09:00.0: cvl_0_0 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:33.856 Found net devices under 0000:09:00.1: cvl_0_1 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.856 09:44:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:33.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:33.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:10:33.856 00:10:33.856 --- 10.0.0.2 ping statistics --- 00:10:33.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.856 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:33.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:33.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:10:33.856 00:10:33.856 --- 10.0.0.1 ping statistics --- 00:10:33.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.856 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:33.856 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:33.857 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:33.857 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3670882 00:10:33.857 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:33.857 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:33.857 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3670882 00:10:33.857 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3670882 ']' 00:10:33.857 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.857 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.857 09:44:10 
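For readers following the nvmf_tcp_init trace above: with a physical (NET_TYPE=phy) dual-port NIC, the harness moves one port into a network namespace so the SPDK target and the initiator can exercise real hardware on a single host, punches a firewall hole for port 4420, and verifies reachability in both directions with the two pings shown. A minimal standalone sketch of that pattern, using the interface, address and namespace names from this run (they will differ on other machines):

    #!/usr/bin/env bash
    # Two-port loopback topology in the spirit of nvmf_tcp_init (names from this run).
    TARGET_IF=cvl_0_0          # moved into the namespace, used by the SPDK target
    INITIATOR_IF=cvl_0_1       # stays in the root namespace, used by the initiator
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP traffic in; the comment tag lets teardown strip the rule later.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF

    # Reachability check in both directions, matching the two pings in the log.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1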
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.857 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.857 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:34.812 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.812 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:34.813 09:44:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:47.008 Initializing NVMe Controllers 00:10:47.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:47.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:47.008 Initialization complete. Launching workers. 00:10:47.008 ======================================================== 00:10:47.008 Latency(us) 00:10:47.008 Device Information : IOPS MiB/s Average min max 00:10:47.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14256.99 55.69 4488.90 881.17 15352.39 00:10:47.008 ======================================================== 00:10:47.008 Total : 14256.99 55.69 4488.90 881.17 15352.39 00:10:47.008 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:47.008 rmmod nvme_tcp 00:10:47.008 rmmod nvme_fabrics 00:10:47.008 rmmod nvme_keyring 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3670882 ']' 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3670882 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3670882 ']' 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3670882 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3670882 00:10:47.008 09:44:21 
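Everything between nvmfexamplestart and the perf summary just above reduces to launching the example target inside the namespace and issuing five RPCs before driving load at it. A condensed, hedged recap with the values from this run (rpc_cmd in the harness ultimately forwards to scripts/rpc.py; the socket-wait loop is only a stand-in for waitforlisten):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    # Example NVMe-oF target on cores 0-3 (-m 0xF) inside the target namespace.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # stand-in for waitforlisten

    "$RPC" nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as traced
    "$RPC" bdev_malloc_create 64 512                      # 64 MiB RAM-backed bdev -> Malloc0
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 10 s of 4 KiB mixed random read/write at queue depth 64, as in the log.
    "$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

This invocation is what produced the summary a few lines up: roughly 14.3 k IOPS (55.69 MiB/s) at an average latency of about 4.5 ms over the 10-second run.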
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3670882' 00:10:47.008 killing process with pid 3670882 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3670882 00:10:47.008 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3670882 00:10:47.008 nvmf threads initialize successfully 00:10:47.008 bdev subsystem init successfully 00:10:47.008 created a nvmf target service 00:10:47.008 create targets's poll groups done 00:10:47.008 all subsystems of target started 00:10:47.008 nvmf target is running 00:10:47.008 all subsystems of target stopped 00:10:47.008 destroy targets's poll groups done 00:10:47.008 destroyed the nvmf target service 00:10:47.008 bdev subsystem finish successfully 00:10:47.008 nvmf threads destroy successfully 00:10:47.008 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:47.008 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:47.008 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:47.008 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:47.008 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:47.008 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:47.008 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:47.008 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:47.008 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:47.008 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.008 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.008 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.580 00:10:47.580 real 0m16.381s 00:10:47.580 user 0m45.233s 00:10:47.580 sys 0m3.834s 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.580 ************************************ 00:10:47.580 END TEST nvmf_example 00:10:47.580 ************************************ 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
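The tail of the nvmf_example run is nvmftestfini: unload the initiator-side NVMe/TCP modules, stop the target, and undo the firewall and namespace changes. Roughly, with the names from this run (the netns deletion is an assumption about what _remove_spdk_ns does here, since its own trace is redirected away):

    sync
    modprobe -v -r nvme-tcp                   # the log shows this also pulling out nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"        # killprocess 3670882 in the log
    # Strip only the rules tagged SPDK_NVMF during setup, keep everything else.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk           # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1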
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:47.580 ************************************ 00:10:47.580 START TEST nvmf_filesystem 00:10:47.580 ************************************ 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:47.580 * Looking for test storage... 00:10:47.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:47.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.580 --rc genhtml_branch_coverage=1 00:10:47.580 --rc genhtml_function_coverage=1 00:10:47.580 --rc genhtml_legend=1 00:10:47.580 --rc geninfo_all_blocks=1 00:10:47.580 --rc geninfo_unexecuted_blocks=1 00:10:47.580 00:10:47.580 ' 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:47.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.580 --rc genhtml_branch_coverage=1 00:10:47.580 --rc genhtml_function_coverage=1 00:10:47.580 --rc genhtml_legend=1 00:10:47.580 --rc geninfo_all_blocks=1 00:10:47.580 --rc geninfo_unexecuted_blocks=1 00:10:47.580 00:10:47.580 ' 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:47.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.580 --rc genhtml_branch_coverage=1 00:10:47.580 --rc genhtml_function_coverage=1 00:10:47.580 --rc genhtml_legend=1 00:10:47.580 --rc geninfo_all_blocks=1 00:10:47.580 --rc geninfo_unexecuted_blocks=1 00:10:47.580 00:10:47.580 ' 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:47.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.580 --rc genhtml_branch_coverage=1 00:10:47.580 --rc genhtml_function_coverage=1 00:10:47.580 --rc genhtml_legend=1 00:10:47.580 --rc geninfo_all_blocks=1 00:10:47.580 --rc geninfo_unexecuted_blocks=1 00:10:47.580 00:10:47.580 ' 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:47.580 09:44:24 
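The scripts/common.sh trace above (the same helper already ran at the start of the nvmf_example section) is the harness comparing the installed lcov version against 2 so it can pick the matching --rc option names for coverage. A condensed sketch of that element-wise comparison; the real script additionally validates each component through its decimal helper:

    # Succeeds (returns 0) when version $1 sorts strictly below version $2.
    lt() {
        local IFS='.-:'          # split on dots, dashes and colons, as in cmp_versions
        local -a ver1 ver2
        local v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((10#${ver1[v]:-0} > 10#${ver2[v]:-0})) && return 1
            ((10#${ver1[v]:-0} < 10#${ver2[v]:-0})) && return 0
        done
        return 1                 # equal versions are not "less than"
    }

    lt 1.15 2 && echo "1.15 < 2"        # the case traced above: returns 0
    # As in the trace, a pre-2.0 lcov gets the lcov_-prefixed rc flags:
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi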
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:47.580 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:47.581 
09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:47.581 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:47.582 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:47.582 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:47.582 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:47.582 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:47.582 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:47.582 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:47.582 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:47.582 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:47.582 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:47.582 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:47.582 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:47.582 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:47.582 #define SPDK_CONFIG_H 00:10:47.582 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:47.582 #define SPDK_CONFIG_APPS 1 00:10:47.582 #define SPDK_CONFIG_ARCH native 00:10:47.582 #undef SPDK_CONFIG_ASAN 00:10:47.582 #undef SPDK_CONFIG_AVAHI 00:10:47.582 #undef SPDK_CONFIG_CET 00:10:47.582 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:47.582 #define SPDK_CONFIG_COVERAGE 1 00:10:47.582 #define SPDK_CONFIG_CROSS_PREFIX 00:10:47.582 #undef SPDK_CONFIG_CRYPTO 00:10:47.582 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:47.582 #undef SPDK_CONFIG_CUSTOMOCF 00:10:47.582 #undef SPDK_CONFIG_DAOS 00:10:47.582 #define SPDK_CONFIG_DAOS_DIR 00:10:47.582 #define SPDK_CONFIG_DEBUG 1 00:10:47.582 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:47.582 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:47.582 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:47.582 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:47.582 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:47.582 #undef SPDK_CONFIG_DPDK_UADK 00:10:47.582 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:47.582 #define SPDK_CONFIG_EXAMPLES 1 00:10:47.582 #undef SPDK_CONFIG_FC 00:10:47.582 #define SPDK_CONFIG_FC_PATH 00:10:47.582 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:47.582 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:47.582 #define SPDK_CONFIG_FSDEV 1 00:10:47.582 #undef SPDK_CONFIG_FUSE 00:10:47.582 #undef SPDK_CONFIG_FUZZER 00:10:47.582 #define SPDK_CONFIG_FUZZER_LIB 00:10:47.582 #undef SPDK_CONFIG_GOLANG 00:10:47.582 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:47.582 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:47.582 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:47.582 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:47.582 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:47.582 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:47.582 #undef SPDK_CONFIG_HAVE_LZ4 00:10:47.582 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:47.582 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:47.582 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:47.582 #define SPDK_CONFIG_IDXD 1 00:10:47.582 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:47.582 #undef SPDK_CONFIG_IPSEC_MB 00:10:47.582 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:47.582 #define SPDK_CONFIG_ISAL 1 00:10:47.582 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:47.582 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:47.582 #define SPDK_CONFIG_LIBDIR 00:10:47.582 #undef SPDK_CONFIG_LTO 00:10:47.582 #define SPDK_CONFIG_MAX_LCORES 128 00:10:47.582 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:47.582 #define SPDK_CONFIG_NVME_CUSE 1 00:10:47.582 #undef SPDK_CONFIG_OCF 00:10:47.582 #define SPDK_CONFIG_OCF_PATH 00:10:47.582 #define SPDK_CONFIG_OPENSSL_PATH 00:10:47.582 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:47.582 #define SPDK_CONFIG_PGO_DIR 00:10:47.582 #undef SPDK_CONFIG_PGO_USE 00:10:47.582 #define SPDK_CONFIG_PREFIX /usr/local 00:10:47.582 #undef SPDK_CONFIG_RAID5F 00:10:47.582 #undef SPDK_CONFIG_RBD 00:10:47.582 #define SPDK_CONFIG_RDMA 1 00:10:47.582 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:47.582 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:47.582 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:47.582 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:47.582 #define SPDK_CONFIG_SHARED 1 00:10:47.582 #undef SPDK_CONFIG_SMA 00:10:47.582 #define SPDK_CONFIG_TESTS 1 00:10:47.582 #undef SPDK_CONFIG_TSAN 
00:10:47.582 #define SPDK_CONFIG_UBLK 1 00:10:47.582 #define SPDK_CONFIG_UBSAN 1 00:10:47.582 #undef SPDK_CONFIG_UNIT_TESTS 00:10:47.582 #undef SPDK_CONFIG_URING 00:10:47.582 #define SPDK_CONFIG_URING_PATH 00:10:47.582 #undef SPDK_CONFIG_URING_ZNS 00:10:47.582 #undef SPDK_CONFIG_USDT 00:10:47.582 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:47.582 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:47.582 #define SPDK_CONFIG_VFIO_USER 1 00:10:47.582 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:47.582 #define SPDK_CONFIG_VHOST 1 00:10:47.582 #define SPDK_CONFIG_VIRTIO 1 00:10:47.582 #undef SPDK_CONFIG_VTUNE 00:10:47.582 #define SPDK_CONFIG_VTUNE_DIR 00:10:47.582 #define SPDK_CONFIG_WERROR 1 00:10:47.582 #define SPDK_CONFIG_WPDK_DIR 00:10:47.582 #undef SPDK_CONFIG_XNVME 00:10:47.582 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:47.582 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:47.582 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.582 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:47.582 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.582 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.582 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.582 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:47.583 09:44:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:47.583 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:47.846 09:44:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:47.846 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:47.847 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
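The xtrace records above show common/autotest_common.sh giving each SPDK_TEST_*/SPDK_RUN_* flag a default only when it is unset, exporting it, and then assembling the library search paths and sanitizer options for the run. A minimal bash sketch of that default-then-export idiom, reconstructed for illustration from the values visible in this trace (not the script verbatim):

    # Give each flag a default only if the caller did not set it, then export
    # it so every child process sees the same test configuration.
    : "${RUN_NIGHTLY:=0}";                export RUN_NIGHTLY
    : "${SPDK_RUN_FUNCTIONAL_TEST:=1}";   export SPDK_RUN_FUNCTIONAL_TEST
    : "${SPDK_RUN_UBSAN:=1}";             export SPDK_RUN_UBSAN
    : "${SPDK_TEST_NVMF:=1}";             export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_TEST_NVMF_NICS:=e810}";     export SPDK_TEST_NVMF_NICS

    # Library and Python paths point at the build tree, DPDK, and libvfio-user
    # (workspace layout as seen in this run; the real script prepends these on
    # every source, which is why the logged values contain repeats).
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    export LD_LIBRARY_PATH="$rootdir/build/lib:$rootdir/dpdk/build/lib:$rootdir/build/libvfio-user/usr/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    export PYTHONPATH="$rootdir/python:$rootdir/test/rpc_plugins${PYTHONPATH:+:$PYTHONPATH}"

    # Sanitizer behaviour is pinned so any ASAN/UBSAN hit aborts the run.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
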
00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
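The records above also pin the harness defaults for the RPC socket and the QEMU binaries used by vfio-user tests (DEFAULT_RPC_ADDR, QEMU_BIN, VFIO_QEMU_BIN, UNBIND_ENTIRE_IOMMU_GROUP). When reproducing this run outside Jenkins, the same variables can be exported before launching the test. The values below are the ones seen in this trace; the final --transport=tcp invocation is an assumption inferred from the argument parsing at common/autotest_common.sh@310-316, not something shown literally in the log:

    # Hypothetical local reproduction; adjust the QEMU paths to your own build.
    export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
    export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
    export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
    export UNBIND_ENTIRE_IOMMU_GROUP=yes

    # Drive the filesystem test with the transport this job selected (assumed flag).
    ./test/nvmf/target/filesystem.sh --transport=tcp
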
00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3672588 ]] 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3672588 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
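The trace that follows steps through set_test_storage from common/autotest_common.sh: it creates a mktemp-based fallback path, builds a candidate list (the test's own directory, a scratch directory under the fallback), reads df -T, and settles on the first candidate whose filesystem can hold the requested ~2 GiB, exporting it as SPDK_TEST_STORAGE. A simplified bash sketch of that selection logic (illustrative only; it omits the overlay/tmpfs special-casing and the 95% capacity check the real function performs):

    # Pick the first candidate directory with enough free space for the test
    # and export it as SPDK_TEST_STORAGE (simplified from the trace below).
    set_test_storage_sketch() {
        local requested_size=$1 testdir=$2
        local fallback candidate avail

        fallback=$(mktemp -udt spdk.XXXXXX)
        local candidates=("$testdir" "$fallback/tests/${testdir##*/}" "$fallback")

        for candidate in "${candidates[@]}"; do
            mkdir -p "$candidate"
            # Free bytes on the filesystem backing this candidate (GNU df).
            avail=$(df -B1 --output=avail "$candidate" | tail -1)
            if ((avail >= requested_size)); then
                export SPDK_TEST_STORAGE=$candidate
                printf '* Found test storage at %s\n' "$candidate"
                return 0
            fi
        done
        return 1
    }

    # e.g. set_test_storage_sketch $((2 * 1024 * 1024 * 1024)) "$rootdir/test/nvmf/target"
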
00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:47.848 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.g5NQL1 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.g5NQL1/tests/target /tmp/spdk.g5NQL1 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:47.849 09:44:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50798071808 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988519936 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11190448128 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30982893568 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375265280 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22441984 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=29919698944 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1074561024 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:47.849 09:44:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:47.849 * Looking for test storage... 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=50798071808 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13405040640 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.849 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:47.850 09:44:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:47.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.850 --rc genhtml_branch_coverage=1 00:10:47.850 --rc genhtml_function_coverage=1 00:10:47.850 --rc genhtml_legend=1 00:10:47.850 --rc geninfo_all_blocks=1 00:10:47.850 --rc geninfo_unexecuted_blocks=1 00:10:47.850 00:10:47.850 ' 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:47.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.850 --rc genhtml_branch_coverage=1 00:10:47.850 --rc genhtml_function_coverage=1 00:10:47.850 --rc genhtml_legend=1 00:10:47.850 --rc geninfo_all_blocks=1 00:10:47.850 --rc geninfo_unexecuted_blocks=1 00:10:47.850 00:10:47.850 ' 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:47.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.850 --rc genhtml_branch_coverage=1 00:10:47.850 --rc genhtml_function_coverage=1 00:10:47.850 --rc genhtml_legend=1 00:10:47.850 --rc geninfo_all_blocks=1 00:10:47.850 --rc geninfo_unexecuted_blocks=1 00:10:47.850 00:10:47.850 ' 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:47.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.850 --rc genhtml_branch_coverage=1 00:10:47.850 --rc genhtml_function_coverage=1 00:10:47.850 --rc genhtml_legend=1 00:10:47.850 --rc geninfo_all_blocks=1 00:10:47.850 --rc geninfo_unexecuted_blocks=1 00:10:47.850 00:10:47.850 ' 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.850 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:47.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:47.851 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:50.387 
09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:50.387 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:50.387 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.387 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:50.388 Found net devices under 0000:09:00.0: cvl_0_0 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:50.388 Found net devices under 
0000:09:00.1: cvl_0_1 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.388 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:50.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:10:50.388 00:10:50.388 --- 10.0.0.2 ping statistics --- 00:10:50.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.388 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:50.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:10:50.388 00:10:50.388 --- 10.0.0.1 ping statistics --- 00:10:50.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.388 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.388 ************************************ 00:10:50.388 START TEST nvmf_filesystem_no_in_capsule 00:10:50.388 ************************************ 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
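
The nvmf_tcp_init sequence captured above is a namespace-based loopback: the e810 port under 0000:09:00.0 (cvl_0_0) is moved into a private namespace as the target side, while its sibling under 0000:09:00.1 (cvl_0_1) stays in the root namespace as the initiator side. A minimal standalone sketch, using the interface names, addresses and firewall rule exactly as they appear in this run (root privileges assumed):

  # target-side port goes into its own network namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps 10.0.0.1, target answers on 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check both directions before starting the target, then load the host driver
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
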
00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3674345 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3674345 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3674345 ']' 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.388 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.388 [2024-11-20 09:44:27.171759] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:10:50.388 [2024-11-20 09:44:27.171834] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.389 [2024-11-20 09:44:27.249754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.647 [2024-11-20 09:44:27.313322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.647 [2024-11-20 09:44:27.313388] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.647 [2024-11-20 09:44:27.313402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.647 [2024-11-20 09:44:27.313413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.647 [2024-11-20 09:44:27.313423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
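
nvmfappstart then launches the target binary inside that namespace and blocks until its RPC socket is usable. Stripped of the harness wrappers, the launch recorded here is roughly as follows; the polling loop is a simplified stand-in for the harness's waitforlisten, not its actual implementation:

  SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
  ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait for the app to create its RPC socket before issuing any RPC calls
  while [ ! -S /var/tmp/spdk.sock ]; do
      kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
      sleep 0.5
  done
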
00:10:50.647 [2024-11-20 09:44:27.315040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.647 [2024-11-20 09:44:27.315104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.647 [2024-11-20 09:44:27.315153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.647 [2024-11-20 09:44:27.315150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.647 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.647 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:50.647 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:50.647 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:50.647 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.647 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.647 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:50.647 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:50.647 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.647 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.647 [2024-11-20 09:44:27.471396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.647 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.647 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:50.647 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.647 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.905 Malloc1 00:10:50.905 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.905 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:50.905 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.905 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.905 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.905 09:44:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:50.905 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.905 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.905 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.905 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.905 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.905 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.906 [2024-11-20 09:44:27.664864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.906 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.906 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:50.906 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:50.906 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:50.906 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:50.906 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:50.906 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:50.906 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.906 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.906 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.906 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:50.906 { 00:10:50.906 "name": "Malloc1", 00:10:50.906 "aliases": [ 00:10:50.906 "91c5bcdc-5fe9-454e-9056-6d460ac294bc" 00:10:50.906 ], 00:10:50.906 "product_name": "Malloc disk", 00:10:50.906 "block_size": 512, 00:10:50.906 "num_blocks": 1048576, 00:10:50.906 "uuid": "91c5bcdc-5fe9-454e-9056-6d460ac294bc", 00:10:50.906 "assigned_rate_limits": { 00:10:50.906 "rw_ios_per_sec": 0, 00:10:50.906 "rw_mbytes_per_sec": 0, 00:10:50.906 "r_mbytes_per_sec": 0, 00:10:50.906 "w_mbytes_per_sec": 0 00:10:50.906 }, 00:10:50.906 "claimed": true, 00:10:50.906 "claim_type": "exclusive_write", 00:10:50.906 "zoned": false, 00:10:50.906 "supported_io_types": { 00:10:50.906 "read": 
true, 00:10:50.906 "write": true, 00:10:50.906 "unmap": true, 00:10:50.906 "flush": true, 00:10:50.906 "reset": true, 00:10:50.906 "nvme_admin": false, 00:10:50.906 "nvme_io": false, 00:10:50.906 "nvme_io_md": false, 00:10:50.906 "write_zeroes": true, 00:10:50.906 "zcopy": true, 00:10:50.906 "get_zone_info": false, 00:10:50.906 "zone_management": false, 00:10:50.906 "zone_append": false, 00:10:50.906 "compare": false, 00:10:50.906 "compare_and_write": false, 00:10:50.906 "abort": true, 00:10:50.906 "seek_hole": false, 00:10:50.906 "seek_data": false, 00:10:50.906 "copy": true, 00:10:50.906 "nvme_iov_md": false 00:10:50.906 }, 00:10:50.906 "memory_domains": [ 00:10:50.906 { 00:10:50.906 "dma_device_id": "system", 00:10:50.906 "dma_device_type": 1 00:10:50.906 }, 00:10:50.906 { 00:10:50.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.906 "dma_device_type": 2 00:10:50.906 } 00:10:50.906 ], 00:10:50.906 "driver_specific": {} 00:10:50.906 } 00:10:50.906 ]' 00:10:50.906 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:50.906 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:50.906 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:50.906 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:50.906 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:50.906 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:50.906 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:50.906 09:44:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:51.839 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:51.839 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:51.839 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:51.839 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:51.839 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:53.737 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:53.738 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:53.738 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:53.738 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:53.738 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:53.738 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:53.738 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:53.738 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:53.738 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:53.738 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:53.738 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:53.738 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:53.738 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:53.738 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:53.738 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:53.738 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:53.738 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:53.995 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:54.929 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:55.862 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:55.862 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:55.862 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:55.862 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.862 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.862 ************************************ 00:10:55.862 START TEST filesystem_ext4 00:10:55.862 ************************************ 00:10:55.862 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
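
Before the per-filesystem subtests start, target/filesystem.sh provisions one malloc-backed namespace over TCP and attaches to it from the initiator side. Assuming the harness's rpc_cmd forwards to scripts/rpc.py (path relative to the spdk checkout, default socket /var/tmp/spdk.sock), the no-in-capsule variant traced above amounts to:

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0   # -c 0: no in-capsule data (this variant)
  $RPC bdev_malloc_create 512 512 -b Malloc1          # 512 MiB bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: connect, then carve a single GPT partition for the filesystem tests
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
       --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
  mkdir -p /mnt/device
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
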
00:10:55.862 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:55.862 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:55.862 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:55.862 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:55.862 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:55.862 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:55.862 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:55.862 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:55.862 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:55.862 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:55.862 mke2fs 1.47.0 (5-Feb-2023) 00:10:55.862 Discarding device blocks: 0/522240 done 00:10:55.862 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:55.862 Filesystem UUID: 08aa016d-b110-478e-ba01-5554917075f1 00:10:55.862 Superblock backups stored on blocks: 00:10:55.862 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:55.862 00:10:55.862 Allocating group tables: 0/64 done 00:10:55.862 Writing inode tables: 0/64 done 00:10:59.141 Creating journal (8192 blocks): done 00:11:00.899 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:11:00.899 00:11:00.899 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:00.899 09:44:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:07.454 
09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3674345 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:07.454 00:11:07.454 real 0m10.711s 00:11:07.454 user 0m0.022s 00:11:07.454 sys 0m0.063s 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:07.454 ************************************ 00:11:07.454 END TEST filesystem_ext4 00:11:07.454 ************************************ 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.454 ************************************ 00:11:07.454 START TEST filesystem_btrfs 00:11:07.454 ************************************ 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:07.454 09:44:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:07.454 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:07.454 btrfs-progs v6.8.1 00:11:07.454 See https://btrfs.readthedocs.io for more information. 00:11:07.454 00:11:07.454 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:07.454 NOTE: several default settings have changed in version 5.15, please make sure 00:11:07.454 this does not affect your deployments: 00:11:07.454 - DUP for metadata (-m dup) 00:11:07.454 - enabled no-holes (-O no-holes) 00:11:07.454 - enabled free-space-tree (-R free-space-tree) 00:11:07.454 00:11:07.454 Label: (null) 00:11:07.455 UUID: 50cdff41-ce3a-4572-a5cc-fee932f09b88 00:11:07.455 Node size: 16384 00:11:07.455 Sector size: 4096 (CPU page size: 4096) 00:11:07.455 Filesystem size: 510.00MiB 00:11:07.455 Block group profiles: 00:11:07.455 Data: single 8.00MiB 00:11:07.455 Metadata: DUP 32.00MiB 00:11:07.455 System: DUP 8.00MiB 00:11:07.455 SSD detected: yes 00:11:07.455 Zoned device: no 00:11:07.455 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:07.455 Checksum: crc32c 00:11:07.455 Number of devices: 1 00:11:07.455 Devices: 00:11:07.455 ID SIZE PATH 00:11:07.455 1 510.00MiB /dev/nvme0n1p1 00:11:07.455 00:11:07.455 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:07.455 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:08.019 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:08.019 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:08.019 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:08.019 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:08.019 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:08.019 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:08.019 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3674345 00:11:08.019 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:08.019 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:08.019 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:08.019 
09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:08.019 00:11:08.019 real 0m1.444s 00:11:08.019 user 0m0.018s 00:11:08.019 sys 0m0.112s 00:11:08.020 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.020 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:08.020 ************************************ 00:11:08.020 END TEST filesystem_btrfs 00:11:08.020 ************************************ 00:11:08.020 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:08.020 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:08.020 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.020 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.020 ************************************ 00:11:08.020 START TEST filesystem_xfs 00:11:08.020 ************************************ 00:11:08.020 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:08.020 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:08.020 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:08.020 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:08.020 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:08.020 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:08.020 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:08.020 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:08.020 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:08.020 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:08.020 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:08.020 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:08.020 = sectsz=512 attr=2, projid32bit=1 00:11:08.020 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:08.020 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:08.020 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:08.020 = sunit=0 swidth=0 blks 00:11:08.020 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:08.020 log =internal log bsize=4096 blocks=16384, version=2 00:11:08.020 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:08.020 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:08.952 Discarding blocks...Done. 00:11:08.952 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:08.952 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:10.850 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:10.850 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:10.850 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:10.850 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:10.850 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:10.850 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:10.850 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3674345 00:11:10.850 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:10.850 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:10.850 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:10.850 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:10.850 00:11:10.850 real 0m2.802s 00:11:10.850 user 0m0.018s 00:11:10.850 sys 0m0.060s 00:11:10.850 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.850 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:10.850 ************************************ 00:11:10.850 END TEST filesystem_xfs 00:11:10.850 ************************************ 00:11:10.850 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:11.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.107 09:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3674345 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3674345 ']' 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3674345 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3674345 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3674345' 00:11:11.107 killing process with pid 3674345 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3674345 00:11:11.107 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 3674345 00:11:11.672 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:11.672 00:11:11.672 real 0m21.331s 00:11:11.672 user 1m22.764s 00:11:11.672 sys 0m2.459s 00:11:11.672 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.672 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.672 ************************************ 00:11:11.672 END TEST nvmf_filesystem_no_in_capsule 00:11:11.672 ************************************ 00:11:11.672 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:11.672 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:11.672 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.672 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:11.672 ************************************ 00:11:11.672 START TEST nvmf_filesystem_in_capsule 00:11:11.672 ************************************ 00:11:11.672 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:11.672 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:11.672 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:11.672 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:11.672 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:11.672 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.672 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3677010 00:11:11.672 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:11.672 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3677010 00:11:11.672 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3677010 ']' 00:11:11.672 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.672 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.673 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
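
The in-capsule variant that starts here repeats the same per-filesystem flow just seen for ext4, btrfs and xfs, only with -c 4096 on the transport. Condensed from the trace above (fstype and force flag vary per subtest; rpc.py usage is the same assumption as before), each subtest plus the final teardown is roughly:

  mkfs.ext4 -F /dev/nvme0n1p1          # btrfs and xfs use -f instead of -F
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                   # target must still be alive after the I/O cycle
  lsblk -l -o NAME | grep -q -w nvme0n1
  lsblk -l -o NAME | grep -q -w nvme0n1p1

  # teardown after the last subtest
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid"
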
00:11:11.673 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.673 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.673 [2024-11-20 09:44:48.564579] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:11:11.673 [2024-11-20 09:44:48.564674] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.930 [2024-11-20 09:44:48.638091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.930 [2024-11-20 09:44:48.699399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.930 [2024-11-20 09:44:48.699451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.930 [2024-11-20 09:44:48.699466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.930 [2024-11-20 09:44:48.699478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.930 [2024-11-20 09:44:48.699489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.930 [2024-11-20 09:44:48.701097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.930 [2024-11-20 09:44:48.701138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.930 [2024-11-20 09:44:48.701193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.930 [2024-11-20 09:44:48.701197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.930 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.930 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:11.930 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:11.930 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:11.930 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.188 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.188 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:12.188 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:12.189 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.189 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.189 [2024-11-20 09:44:48.846819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:12.189 09:44:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.189 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:12.189 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.189 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.189 Malloc1 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.189 [2024-11-20 09:44:49.026991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:12.189 09:44:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:12.189 { 00:11:12.189 "name": "Malloc1", 00:11:12.189 "aliases": [ 00:11:12.189 "40314c42-3c35-403d-bfbe-e363f43ab37c" 00:11:12.189 ], 00:11:12.189 "product_name": "Malloc disk", 00:11:12.189 "block_size": 512, 00:11:12.189 "num_blocks": 1048576, 00:11:12.189 "uuid": "40314c42-3c35-403d-bfbe-e363f43ab37c", 00:11:12.189 "assigned_rate_limits": { 00:11:12.189 "rw_ios_per_sec": 0, 00:11:12.189 "rw_mbytes_per_sec": 0, 00:11:12.189 "r_mbytes_per_sec": 0, 00:11:12.189 "w_mbytes_per_sec": 0 00:11:12.189 }, 00:11:12.189 "claimed": true, 00:11:12.189 "claim_type": "exclusive_write", 00:11:12.189 "zoned": false, 00:11:12.189 "supported_io_types": { 00:11:12.189 "read": true, 00:11:12.189 "write": true, 00:11:12.189 "unmap": true, 00:11:12.189 "flush": true, 00:11:12.189 "reset": true, 00:11:12.189 "nvme_admin": false, 00:11:12.189 "nvme_io": false, 00:11:12.189 "nvme_io_md": false, 00:11:12.189 "write_zeroes": true, 00:11:12.189 "zcopy": true, 00:11:12.189 "get_zone_info": false, 00:11:12.189 "zone_management": false, 00:11:12.189 "zone_append": false, 00:11:12.189 "compare": false, 00:11:12.189 "compare_and_write": false, 00:11:12.189 "abort": true, 00:11:12.189 "seek_hole": false, 00:11:12.189 "seek_data": false, 00:11:12.189 "copy": true, 00:11:12.189 "nvme_iov_md": false 00:11:12.189 }, 00:11:12.189 "memory_domains": [ 00:11:12.189 { 00:11:12.189 "dma_device_id": "system", 00:11:12.189 "dma_device_type": 1 00:11:12.189 }, 00:11:12.189 { 00:11:12.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.189 "dma_device_type": 2 00:11:12.189 } 00:11:12.189 ], 00:11:12.189 "driver_specific": {} 00:11:12.189 } 00:11:12.189 ]' 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:12.189 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:12.448 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:12.448 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:12.448 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:12.448 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:12.449 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:13.015 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:13.015 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:13.015 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.015 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:13.015 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:14.982 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:14.982 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:14.982 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:14.982 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:14.982 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:14.982 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:14.982 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:14.982 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:14.982 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:14.982 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:14.982 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:14.982 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:14.982 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:14.982 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:14.982 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:14.982 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:14.982 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:15.547 09:44:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:16.113 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:17.047 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:17.048 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:17.048 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:17.048 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.048 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.048 ************************************ 00:11:17.048 START TEST filesystem_in_capsule_ext4 00:11:17.048 ************************************ 00:11:17.048 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:17.048 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:17.048 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:17.048 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:17.048 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:17.048 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:17.048 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:17.048 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:17.048 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:17.048 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:17.048 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:17.048 mke2fs 1.47.0 (5-Feb-2023) 00:11:17.306 Discarding device blocks: 0/522240 done 00:11:17.306 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:17.306 Filesystem UUID: d9187924-0822-4ca3-acf2-8bb318f4ec3d 00:11:17.306 Superblock backups stored on blocks: 00:11:17.306 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:17.306 00:11:17.306 Allocating group tables: 0/64 done 00:11:17.306 Writing inode tables: 
0/64 done 00:11:18.238 Creating journal (8192 blocks): done 00:11:18.238 Writing superblocks and filesystem accounting information: 0/64 done 00:11:18.238 00:11:18.238 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:18.238 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3677010 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:23.497 00:11:23.497 real 0m6.401s 00:11:23.497 user 0m0.024s 00:11:23.497 sys 0m0.053s 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:23.497 ************************************ 00:11:23.497 END TEST filesystem_in_capsule_ext4 00:11:23.497 ************************************ 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.497 
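For reference, the target configuration that the ext4 pass above (and the btrfs/xfs passes below) runs against boils down to five RPCs plus one nvme-cli connect. The commands are taken from the xtrace above and rewritten as a standalone sketch; the RPC shell variable and the direct rpc.py invocation are assumptions standing in for the rpc_cmd and nvme_connect helpers:

# target-side plumbing, issued against the socket the target listens on
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 4096    # 4096-byte in-capsule data for this variant
$RPC bdev_malloc_create 512 512 -b Malloc1              # 512 MiB ram disk, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# host side: attach with nvme-cli, then find the new namespace by its serial
sudo nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
lsblk -l -o NAME,SERIAL | grep -w SPDKISFASTANDAWESOME

The 512 MiB size reported by lsblk is what the test compares against the bdev_get_bdevs output (block_size * num_blocks) before partitioning the namespace.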
************************************ 00:11:23.497 START TEST filesystem_in_capsule_btrfs 00:11:23.497 ************************************ 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:23.497 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:23.755 btrfs-progs v6.8.1 00:11:23.755 See https://btrfs.readthedocs.io for more information. 00:11:23.755 00:11:23.755 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:23.755 NOTE: several default settings have changed in version 5.15, please make sure 00:11:23.755 this does not affect your deployments: 00:11:23.755 - DUP for metadata (-m dup) 00:11:23.755 - enabled no-holes (-O no-holes) 00:11:23.755 - enabled free-space-tree (-R free-space-tree) 00:11:23.755 00:11:23.755 Label: (null) 00:11:23.755 UUID: 387dd0ed-c0bc-4974-9a57-2a8866f0685b 00:11:23.755 Node size: 16384 00:11:23.755 Sector size: 4096 (CPU page size: 4096) 00:11:23.755 Filesystem size: 510.00MiB 00:11:23.755 Block group profiles: 00:11:23.755 Data: single 8.00MiB 00:11:23.755 Metadata: DUP 32.00MiB 00:11:23.755 System: DUP 8.00MiB 00:11:23.755 SSD detected: yes 00:11:23.755 Zoned device: no 00:11:23.755 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:23.755 Checksum: crc32c 00:11:23.755 Number of devices: 1 00:11:23.755 Devices: 00:11:23.755 ID SIZE PATH 00:11:23.755 1 510.00MiB /dev/nvme0n1p1 00:11:23.755 00:11:23.755 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:23.755 09:45:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3677010 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:24.688 00:11:24.688 real 0m1.009s 00:11:24.688 user 0m0.014s 00:11:24.688 sys 0m0.104s 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:24.688 ************************************ 00:11:24.688 END TEST filesystem_in_capsule_btrfs 00:11:24.688 ************************************ 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.688 ************************************ 00:11:24.688 START TEST filesystem_in_capsule_xfs 00:11:24.688 ************************************ 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:24.688 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:24.688 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:24.688 = sectsz=512 attr=2, projid32bit=1 00:11:24.688 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:24.688 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:24.688 data = bsize=4096 blocks=130560, imaxpct=25 00:11:24.688 = sunit=0 swidth=0 blks 00:11:24.688 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:24.688 log =internal log bsize=4096 blocks=16384, version=2 00:11:24.688 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:24.688 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:25.620 Discarding blocks...Done. 
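All three passes (ext4 and btrfs above, xfs here) format the partition through the same make_filesystem helper and then exercise the result with the mount/touch/sync/rm/umount cycle that follows below. A sketch reconstructed from the xtrace, with make_fs standing in for the real helper (which also keeps a retry counter, local i=0, not used here):

# force flag selection as seen in the trace: ext4 wants -F, btrfs and xfs want -f
make_fs() {
    local fstype=$1 dev=$2 force
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    mkfs."$fstype" "$force" "$dev"
}

# the exercise cycle each pass runs afterwards, with the target pid checked
# so a crashed nvmf_tgt fails the test instead of leaving a hung mount
make_fs xfs /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa && sync
rm /mnt/device/aaa && sync
umount /mnt/device
kill -0 "$nvmfpid"    # target must still be alive after the I/O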
00:11:25.620 09:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:25.620 09:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:28.146 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:28.146 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:28.146 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:28.146 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:28.146 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:28.146 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:28.146 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3677010 00:11:28.147 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:28.147 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:28.147 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:28.147 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:28.147 00:11:28.147 real 0m3.424s 00:11:28.147 user 0m0.013s 00:11:28.147 sys 0m0.061s 00:11:28.147 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.147 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:28.147 ************************************ 00:11:28.147 END TEST filesystem_in_capsule_xfs 00:11:28.147 ************************************ 00:11:28.147 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:28.404 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:28.404 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:28.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.404 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:28.404 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:28.404 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:28.404 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.404 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:28.404 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.404 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:28.404 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.404 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.404 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.404 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.405 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:28.405 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3677010 00:11:28.405 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3677010 ']' 00:11:28.405 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3677010 00:11:28.405 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:28.405 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.405 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3677010 00:11:28.405 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.405 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.405 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3677010' 00:11:28.405 killing process with pid 3677010 00:11:28.405 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3677010 00:11:28.405 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3677010 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:28.971 00:11:28.971 real 0m17.164s 00:11:28.971 user 1m6.399s 00:11:28.971 sys 0m2.134s 00:11:28.971 09:45:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.971 ************************************ 00:11:28.971 END TEST nvmf_filesystem_in_capsule 00:11:28.971 ************************************ 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:28.971 rmmod nvme_tcp 00:11:28.971 rmmod nvme_fabrics 00:11:28.971 rmmod nvme_keyring 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.971 09:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.510 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:31.510 00:11:31.510 real 0m43.497s 00:11:31.510 user 2m30.298s 00:11:31.510 sys 0m6.480s 00:11:31.510 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.510 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.510 
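The teardown just above mirrors the setup in reverse. Pulled out of the xtrace, it is roughly the following; rpc.py again stands in for rpc_cmd, and killprocess is reduced to a plain kill plus wait:

# host side: drop the test partition and detach the controller
sudo flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1
# target side: remove the subsystem, then stop the target
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"
# test-bed cleanup, as nvmftestfini does in the trace above
sudo modprobe -r nvme-tcp nvme-fabrics nvme-keyring
sudo iptables-save | grep -v SPDK_NVMF | sudo iptables-restore   # drop the test's firewall rules
sudo ip -4 addr flush cvl_0_1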
************************************ 00:11:31.510 END TEST nvmf_filesystem 00:11:31.510 ************************************ 00:11:31.510 09:45:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:31.510 09:45:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:31.510 09:45:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.510 09:45:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:31.510 ************************************ 00:11:31.510 START TEST nvmf_target_discovery 00:11:31.510 ************************************ 00:11:31.510 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:31.510 * Looking for test storage... 00:11:31.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.510 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:31.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.511 --rc genhtml_branch_coverage=1 00:11:31.511 --rc genhtml_function_coverage=1 00:11:31.511 --rc genhtml_legend=1 00:11:31.511 --rc geninfo_all_blocks=1 00:11:31.511 --rc geninfo_unexecuted_blocks=1 00:11:31.511 00:11:31.511 ' 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:31.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.511 --rc genhtml_branch_coverage=1 00:11:31.511 --rc genhtml_function_coverage=1 00:11:31.511 --rc genhtml_legend=1 00:11:31.511 --rc geninfo_all_blocks=1 00:11:31.511 --rc geninfo_unexecuted_blocks=1 00:11:31.511 00:11:31.511 ' 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:31.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.511 --rc genhtml_branch_coverage=1 00:11:31.511 --rc genhtml_function_coverage=1 00:11:31.511 --rc genhtml_legend=1 00:11:31.511 --rc geninfo_all_blocks=1 00:11:31.511 --rc geninfo_unexecuted_blocks=1 00:11:31.511 00:11:31.511 ' 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:31.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.511 --rc genhtml_branch_coverage=1 00:11:31.511 --rc genhtml_function_coverage=1 00:11:31.511 --rc genhtml_legend=1 00:11:31.511 --rc geninfo_all_blocks=1 00:11:31.511 --rc geninfo_unexecuted_blocks=1 00:11:31.511 00:11:31.511 ' 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.511 09:45:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.511 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:31.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:31.512 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:33.423 09:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:33.423 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:33.423 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:33.423 Found net devices under 0000:09:00.0: cvl_0_0 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
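The gather_supported_nvmf_pci_devs trace above walks the cached PCI IDs for supported NICs (Intel E810 0x1592/0x159b, X722 0x37d2, several Mellanox ConnectX parts) and resolves each matching function to its kernel net device, producing the "Found 0000:09:00.x" and "Found net devices under 0000:09:00.x" lines. Below is a minimal standalone sketch of the same lookup; it assumes the E810 device ID 0x159b reported in this run and uses plain sysfs reads instead of the harness's pci_bus_cache:

# Sketch only: enumerate PCI functions carrying the Intel E810 ID seen above and
# print the net device exposed under each one, as the trace reports.
intel=0x8086; e810_dev=0x159b
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == "$intel" && $(cat "$pci/device") == "$e810_dev" ]] || continue
    for netdev in "$pci"/net/*; do
        [[ -e $netdev ]] && echo "Found net devices under ${pci##*/}: ${netdev##*/}"
    done
done

As a side note, the "line 33: [: : integer expression expected" message near the top of this trace is bash complaining about a numeric test on an empty string ('[' '' -eq 1 ']'); a defensive variant would default the flag first, e.g. [ "${SOME_FLAG:-0}" -eq 1 ] (SOME_FLAG is a placeholder name here, not the variable actually used by nvmf/common.sh).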
00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.423 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:33.424 Found net devices under 0000:09:00.1: cvl_0_1 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.424 09:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:33.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:11:33.424 00:11:33.424 --- 10.0.0.2 ping statistics --- 00:11:33.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.424 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:33.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:11:33.424 00:11:33.424 --- 10.0.0.1 ping statistics --- 00:11:33.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.424 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3681795 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3681795 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3681795 ']' 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.424 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.424 [2024-11-20 09:45:10.286988] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:11:33.424 [2024-11-20 09:45:10.287077] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.683 [2024-11-20 09:45:10.365991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.683 [2024-11-20 09:45:10.426887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.683 [2024-11-20 09:45:10.426942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.683 [2024-11-20 09:45:10.426971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.683 [2024-11-20 09:45:10.426982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.683 [2024-11-20 09:45:10.426991] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
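The nvmf_tcp_init steps traced above isolate the two E810 ports into a point-to-point setup: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator-side port cvl_0_1 keeps 10.0.0.1/24 in the default namespace, an iptables rule opens TCP/4420, both directions are ping-verified, and nvmf_tgt is then started inside the namespace. A condensed sketch of the same plumbing, using the interface names and addresses from this run (the nvmf_tgt path is shortened to a repo-relative one):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# the ipts wrapper repeats the rule arguments inside an SPDK_NVMF comment so that
# teardown can later strip it with: iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # initiator -> target (0.360 ms in this run)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
# target runs inside the namespace, core mask 0xF (cores 0-3), full tracepoint mask
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &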
00:11:33.683 [2024-11-20 09:45:10.428738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.683 [2024-11-20 09:45:10.428784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.683 [2024-11-20 09:45:10.428842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.683 [2024-11-20 09:45:10.428846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.683 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.683 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:33.683 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:33.683 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:33.683 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.683 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.683 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:33.683 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.683 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.683 [2024-11-20 09:45:10.589138] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.941 Null1 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.941 09:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.941 [2024-11-20 09:45:10.629493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.941 Null2 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:33.941 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:33.942 Null3 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.942 Null4 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.942 09:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.942 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:11:34.200 00:11:34.200 Discovery Log Number of Records 6, Generation counter 6 00:11:34.200 =====Discovery Log Entry 0====== 00:11:34.200 trtype: tcp 00:11:34.200 adrfam: ipv4 00:11:34.200 subtype: current discovery subsystem 00:11:34.200 treq: not required 00:11:34.200 portid: 0 00:11:34.200 trsvcid: 4420 00:11:34.200 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:34.200 traddr: 10.0.0.2 00:11:34.200 eflags: explicit discovery connections, duplicate discovery information 00:11:34.200 sectype: none 00:11:34.200 =====Discovery Log Entry 1====== 00:11:34.200 trtype: tcp 00:11:34.200 adrfam: ipv4 00:11:34.200 subtype: nvme subsystem 00:11:34.200 treq: not required 00:11:34.200 portid: 0 00:11:34.200 trsvcid: 4420 00:11:34.200 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:34.200 traddr: 10.0.0.2 00:11:34.200 eflags: none 00:11:34.200 sectype: none 00:11:34.200 =====Discovery Log Entry 2====== 00:11:34.200 trtype: tcp 00:11:34.200 adrfam: ipv4 00:11:34.200 subtype: nvme subsystem 00:11:34.200 treq: not required 00:11:34.200 portid: 0 00:11:34.200 trsvcid: 4420 00:11:34.200 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:34.200 traddr: 10.0.0.2 00:11:34.200 eflags: none 00:11:34.200 sectype: none 00:11:34.200 =====Discovery Log Entry 3====== 00:11:34.200 trtype: tcp 00:11:34.200 adrfam: ipv4 00:11:34.200 subtype: nvme subsystem 00:11:34.200 treq: not required 00:11:34.200 portid: 0 00:11:34.200 trsvcid: 4420 00:11:34.200 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:34.200 traddr: 10.0.0.2 00:11:34.200 eflags: none 00:11:34.200 sectype: none 00:11:34.200 =====Discovery Log Entry 4====== 00:11:34.200 trtype: tcp 00:11:34.200 adrfam: ipv4 00:11:34.200 subtype: nvme subsystem 
00:11:34.200 treq: not required 00:11:34.200 portid: 0 00:11:34.200 trsvcid: 4420 00:11:34.200 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:34.200 traddr: 10.0.0.2 00:11:34.200 eflags: none 00:11:34.200 sectype: none 00:11:34.200 =====Discovery Log Entry 5====== 00:11:34.200 trtype: tcp 00:11:34.200 adrfam: ipv4 00:11:34.200 subtype: discovery subsystem referral 00:11:34.200 treq: not required 00:11:34.200 portid: 0 00:11:34.200 trsvcid: 4430 00:11:34.200 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:34.200 traddr: 10.0.0.2 00:11:34.200 eflags: none 00:11:34.200 sectype: none 00:11:34.200 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:34.200 Perform nvmf subsystem discovery via RPC 00:11:34.200 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.201 [ 00:11:34.201 { 00:11:34.201 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:34.201 "subtype": "Discovery", 00:11:34.201 "listen_addresses": [ 00:11:34.201 { 00:11:34.201 "trtype": "TCP", 00:11:34.201 "adrfam": "IPv4", 00:11:34.201 "traddr": "10.0.0.2", 00:11:34.201 "trsvcid": "4420" 00:11:34.201 } 00:11:34.201 ], 00:11:34.201 "allow_any_host": true, 00:11:34.201 "hosts": [] 00:11:34.201 }, 00:11:34.201 { 00:11:34.201 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:34.201 "subtype": "NVMe", 00:11:34.201 "listen_addresses": [ 00:11:34.201 { 00:11:34.201 "trtype": "TCP", 00:11:34.201 "adrfam": "IPv4", 00:11:34.201 "traddr": "10.0.0.2", 00:11:34.201 "trsvcid": "4420" 00:11:34.201 } 00:11:34.201 ], 00:11:34.201 "allow_any_host": true, 00:11:34.201 "hosts": [], 00:11:34.201 "serial_number": "SPDK00000000000001", 00:11:34.201 "model_number": "SPDK bdev Controller", 00:11:34.201 "max_namespaces": 32, 00:11:34.201 "min_cntlid": 1, 00:11:34.201 "max_cntlid": 65519, 00:11:34.201 "namespaces": [ 00:11:34.201 { 00:11:34.201 "nsid": 1, 00:11:34.201 "bdev_name": "Null1", 00:11:34.201 "name": "Null1", 00:11:34.201 "nguid": "F8A8EC7CD8CF46FD8523AF74703721E6", 00:11:34.201 "uuid": "f8a8ec7c-d8cf-46fd-8523-af74703721e6" 00:11:34.201 } 00:11:34.201 ] 00:11:34.201 }, 00:11:34.201 { 00:11:34.201 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:34.201 "subtype": "NVMe", 00:11:34.201 "listen_addresses": [ 00:11:34.201 { 00:11:34.201 "trtype": "TCP", 00:11:34.201 "adrfam": "IPv4", 00:11:34.201 "traddr": "10.0.0.2", 00:11:34.201 "trsvcid": "4420" 00:11:34.201 } 00:11:34.201 ], 00:11:34.201 "allow_any_host": true, 00:11:34.201 "hosts": [], 00:11:34.201 "serial_number": "SPDK00000000000002", 00:11:34.201 "model_number": "SPDK bdev Controller", 00:11:34.201 "max_namespaces": 32, 00:11:34.201 "min_cntlid": 1, 00:11:34.201 "max_cntlid": 65519, 00:11:34.201 "namespaces": [ 00:11:34.201 { 00:11:34.201 "nsid": 1, 00:11:34.201 "bdev_name": "Null2", 00:11:34.201 "name": "Null2", 00:11:34.201 "nguid": "1F745B87C23041079161CE8F3805BE51", 00:11:34.201 "uuid": "1f745b87-c230-4107-9161-ce8f3805be51" 00:11:34.201 } 00:11:34.201 ] 00:11:34.201 }, 00:11:34.201 { 00:11:34.201 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:34.201 "subtype": "NVMe", 00:11:34.201 "listen_addresses": [ 00:11:34.201 { 00:11:34.201 "trtype": "TCP", 00:11:34.201 "adrfam": "IPv4", 00:11:34.201 "traddr": "10.0.0.2", 
00:11:34.201 "trsvcid": "4420" 00:11:34.201 } 00:11:34.201 ], 00:11:34.201 "allow_any_host": true, 00:11:34.201 "hosts": [], 00:11:34.201 "serial_number": "SPDK00000000000003", 00:11:34.201 "model_number": "SPDK bdev Controller", 00:11:34.201 "max_namespaces": 32, 00:11:34.201 "min_cntlid": 1, 00:11:34.201 "max_cntlid": 65519, 00:11:34.201 "namespaces": [ 00:11:34.201 { 00:11:34.201 "nsid": 1, 00:11:34.201 "bdev_name": "Null3", 00:11:34.201 "name": "Null3", 00:11:34.201 "nguid": "18AC6A7E6EBE46DB929757564D2EE1F8", 00:11:34.201 "uuid": "18ac6a7e-6ebe-46db-9297-57564d2ee1f8" 00:11:34.201 } 00:11:34.201 ] 00:11:34.201 }, 00:11:34.201 { 00:11:34.201 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:34.201 "subtype": "NVMe", 00:11:34.201 "listen_addresses": [ 00:11:34.201 { 00:11:34.201 "trtype": "TCP", 00:11:34.201 "adrfam": "IPv4", 00:11:34.201 "traddr": "10.0.0.2", 00:11:34.201 "trsvcid": "4420" 00:11:34.201 } 00:11:34.201 ], 00:11:34.201 "allow_any_host": true, 00:11:34.201 "hosts": [], 00:11:34.201 "serial_number": "SPDK00000000000004", 00:11:34.201 "model_number": "SPDK bdev Controller", 00:11:34.201 "max_namespaces": 32, 00:11:34.201 "min_cntlid": 1, 00:11:34.201 "max_cntlid": 65519, 00:11:34.201 "namespaces": [ 00:11:34.201 { 00:11:34.201 "nsid": 1, 00:11:34.201 "bdev_name": "Null4", 00:11:34.201 "name": "Null4", 00:11:34.201 "nguid": "32292E08DA264157A23C595F49A82748", 00:11:34.201 "uuid": "32292e08-da26-4157-a23c-595f49a82748" 00:11:34.201 } 00:11:34.201 ] 00:11:34.201 } 00:11:34.201 ] 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.201 09:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.201 09:45:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.201 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.201 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:34.201 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.201 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.201 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.201 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:34.201 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:34.201 09:45:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.201 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:34.202 rmmod nvme_tcp 00:11:34.202 rmmod nvme_fabrics 00:11:34.202 rmmod nvme_keyring 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3681795 ']' 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3681795 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3681795 ']' 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3681795 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:34.202 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3681795 00:11:34.459 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:34.459 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:34.459 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3681795' 00:11:34.459 killing process with pid 3681795 00:11:34.459 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3681795 00:11:34.459 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3681795 00:11:34.459 09:45:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:34.459 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:34.459 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:34.459 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:34.459 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:34.717 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:34.717 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:34.717 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:34.717 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:34.717 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.717 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.717 09:45:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.624 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:36.624 00:11:36.624 real 0m5.566s 00:11:36.624 user 0m4.663s 00:11:36.624 sys 0m1.942s 00:11:36.624 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.624 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:36.624 ************************************ 00:11:36.624 END TEST nvmf_target_discovery 00:11:36.624 ************************************ 00:11:36.624 09:45:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:36.624 09:45:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:36.624 09:45:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.624 09:45:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:36.624 ************************************ 00:11:36.624 START TEST nvmf_referrals 00:11:36.624 ************************************ 00:11:36.624 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:36.624 * Looking for test storage... 
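Condensing the rpc_cmd trace of the discovery test that just finished: it creates the TCP transport, then for i=1..4 a null bdev (size 102400, block size 512, as passed above), a subsystem nqn.2016-06.io.spdk:cnodeN exposing it, and a TCP listener on 10.0.0.2:4420; it also adds a discovery listener and a referral to port 4430, verifies six discovery log records from the host side, dumps the subsystems over RPC, and tears everything down. A sketch of that sequence, assuming SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock; method names and arguments are copied from the trace, and only the i=1 iteration is shown:

# transport + one of the four subsystems (the loop repeats for cnode2..cnode4)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_null_create Null1 102400 512
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# discovery service listener plus a referral pointing at port 4430
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
# host-side view: 6 records = current discovery subsystem + 4 NVMe subsystems + 1 referral
nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems
# teardown mirrors setup
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py bdev_null_delete Null1
./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430

In this run bdev_get_bdevs | jq -r '.[].name' returned nothing after the deletions, after which nvmftestfini unloaded nvme-tcp/nvme-fabrics/nvme-keyring, killed pid 3681795, restored iptables, and removed the namespace.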
00:11:36.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.624 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:36.624 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:36.625 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:36.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.885 --rc genhtml_branch_coverage=1 00:11:36.885 --rc genhtml_function_coverage=1 00:11:36.885 --rc genhtml_legend=1 00:11:36.885 --rc geninfo_all_blocks=1 00:11:36.885 --rc geninfo_unexecuted_blocks=1 00:11:36.885 00:11:36.885 ' 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:36.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.885 --rc genhtml_branch_coverage=1 00:11:36.885 --rc genhtml_function_coverage=1 00:11:36.885 --rc genhtml_legend=1 00:11:36.885 --rc geninfo_all_blocks=1 00:11:36.885 --rc geninfo_unexecuted_blocks=1 00:11:36.885 00:11:36.885 ' 00:11:36.885 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:36.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.885 --rc genhtml_branch_coverage=1 00:11:36.885 --rc genhtml_function_coverage=1 00:11:36.885 --rc genhtml_legend=1 00:11:36.885 --rc geninfo_all_blocks=1 00:11:36.885 --rc geninfo_unexecuted_blocks=1 00:11:36.886 00:11:36.886 ' 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:36.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.886 --rc genhtml_branch_coverage=1 00:11:36.886 --rc genhtml_function_coverage=1 00:11:36.886 --rc genhtml_legend=1 00:11:36.886 --rc geninfo_all_blocks=1 00:11:36.886 --rc geninfo_unexecuted_blocks=1 00:11:36.886 00:11:36.886 ' 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:36.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:36.886 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:39.417 09:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:39.417 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:39.417 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:39.417 
09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:39.417 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:39.418 Found net devices under 0000:09:00.0: cvl_0_0 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:39.418 Found net devices under 0000:09:00.1: cvl_0_1 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:39.418 09:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:39.418 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:39.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:39.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:11:39.418 00:11:39.418 --- 10.0.0.2 ping statistics --- 00:11:39.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.418 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:39.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:39.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:11:39.418 00:11:39.418 --- 10.0.0.1 ping statistics --- 00:11:39.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.418 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3684010 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3684010 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3684010 ']' 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
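The nvmf_tcp_init sequence traced above (nvmf/common.sh) is what gives the test its TCP path: one of the two E810 ports detected earlier is moved into a private network namespace and addressed as the target side, the other stays in the root namespace as the initiator side, a firewall exception is opened for the NVMe/TCP port, and a ping in each direction confirms the link before the target is launched inside that namespace. Condensed to its commands, it is roughly the following (a sketch using the interface names from this run; the helper itself adds error handling and variable setup):

  # target port goes into its own namespace, initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator side: 10.0.0.1, target side: 10.0.0.2
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic in, tagged so cleanup can find and drop the rule later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # sanity-check both directions before starting nvmf_tgt in the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1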
00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.418 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.418 [2024-11-20 09:45:16.091476] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:11:39.418 [2024-11-20 09:45:16.091558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.418 [2024-11-20 09:45:16.158764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.418 [2024-11-20 09:45:16.213692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.418 [2024-11-20 09:45:16.213742] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:39.418 [2024-11-20 09:45:16.213769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.418 [2024-11-20 09:45:16.213780] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:39.418 [2024-11-20 09:45:16.213789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:39.418 [2024-11-20 09:45:16.215396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.418 [2024-11-20 09:45:16.215464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.418 [2024-11-20 09:45:16.215485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:39.418 [2024-11-20 09:45:16.215488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.676 [2024-11-20 09:45:16.362979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:11:39.676 [2024-11-20 09:45:16.375212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:39.676 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:39.933 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:39.933 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:39.933 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:39.934 09:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:39.934 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:40.191 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:40.192 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:40.450 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:40.450 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:40.450 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:40.450 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:40.450 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:40.450 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.450 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:40.450 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:40.450 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:40.450 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:40.450 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:40.450 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.450 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.708 09:45:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:40.708 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:40.965 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:40.965 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:40.965 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:40.965 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:40.965 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:40.965 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.965 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:40.965 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:40.966 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:40.966 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:40.966 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:40.966 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.966 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:41.223 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:41.223 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:41.223 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.223 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:41.223 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.223 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:41.223 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.223 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:41.223 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:41.223 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.223 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:41.223 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:41.223 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:41.223 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:41.223 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:41.223 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:41.223 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
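Stripped of the xtrace noise, the referral checks that just completed are a short JSON-RPC/discovery round trip against the target listening for discovery on 10.0.0.2:8009. A condensed sketch (rpc_cmd is the suite's JSON-RPC helper, and "${NVME_HOST[@]}" expands to the --hostnqn/--hostid pair generated in common.sh and visible in the nvme discover lines above):

  # publish three plain referrals on the discovery service
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
  rpc_cmd nvmf_discovery_get_referrals | jq length          # target-side view: 3
  # initiator-side view: the referrals appear in the discovery log page
  nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  # remove them again and re-check both views
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430
  # referrals may also carry an explicit subsystem NQN instead of pointing at a discovery service
  rpc_cmd nvmf_discovery_add_referral    -t tcp -a 127.0.0.2 -s 4430 -n discovery
  rpc_cmd nvmf_discovery_add_referral    -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery

The end-of-test expectation is exactly what the trace shows: nvmf_discovery_get_referrals reports length 0 and the nvme discover output lists no referral entries before nvmftestfini tears the target down.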
00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:41.481 rmmod nvme_tcp 00:11:41.481 rmmod nvme_fabrics 00:11:41.481 rmmod nvme_keyring 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3684010 ']' 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3684010 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3684010 ']' 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3684010 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3684010 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3684010' 00:11:41.481 killing process with pid 3684010 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3684010 00:11:41.481 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3684010 00:11:41.740 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:41.740 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:41.740 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:41.740 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:41.740 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:41.740 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:41.740 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:41.740 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:41.740 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:41.740 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.740 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.740 09:45:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:44.271 00:11:44.271 real 0m7.170s 00:11:44.271 user 0m11.060s 00:11:44.271 sys 0m2.416s 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.271 ************************************ 00:11:44.271 END TEST nvmf_referrals 00:11:44.271 ************************************ 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:44.271 ************************************ 00:11:44.271 START TEST nvmf_connect_disconnect 00:11:44.271 ************************************ 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:44.271 * Looking for test storage... 00:11:44.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.271 09:45:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:44.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.271 --rc genhtml_branch_coverage=1 00:11:44.271 --rc genhtml_function_coverage=1 00:11:44.271 --rc genhtml_legend=1 00:11:44.271 --rc geninfo_all_blocks=1 00:11:44.271 --rc geninfo_unexecuted_blocks=1 00:11:44.271 00:11:44.271 ' 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:44.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.271 --rc genhtml_branch_coverage=1 00:11:44.271 --rc genhtml_function_coverage=1 00:11:44.271 --rc genhtml_legend=1 00:11:44.271 --rc geninfo_all_blocks=1 00:11:44.271 --rc geninfo_unexecuted_blocks=1 00:11:44.271 00:11:44.271 ' 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:44.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.271 --rc genhtml_branch_coverage=1 00:11:44.271 --rc genhtml_function_coverage=1 00:11:44.271 --rc genhtml_legend=1 00:11:44.271 --rc geninfo_all_blocks=1 00:11:44.271 --rc geninfo_unexecuted_blocks=1 00:11:44.271 00:11:44.271 ' 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:44.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.271 --rc genhtml_branch_coverage=1 00:11:44.271 --rc genhtml_function_coverage=1 00:11:44.271 --rc genhtml_legend=1 00:11:44.271 --rc geninfo_all_blocks=1 00:11:44.271 --rc geninfo_unexecuted_blocks=1 00:11:44.271 00:11:44.271 ' 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.271 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.272 09:45:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:44.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:44.272 09:45:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.175 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:46.175 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:46.175 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:46.175 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:46.175 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:46.175 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:46.175 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:46.176 
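The "[: : integer expression expected" message logged above is a bash quirk rather than a test failure: line 33 of test/nvmf/common.sh runs a numeric test of the form [ '' -eq 1 ] when the flag it inspects is empty, and -eq insists on integer operands on both sides. A minimal reproduction and two defensive spellings (the variable name below is a placeholder, not the one common.sh actually tests):

    # Reproduce the warning: an empty string where -eq expects an integer.
    SOME_TEST_FLAG=""
    [ "$SOME_TEST_FLAG" -eq 1 ] && echo enabled      # -> "[: : integer expression expected"

    # Defensive alternatives: default the value, or compare as a string.
    [ "${SOME_TEST_FLAG:-0}" -eq 1 ] && echo enabled
    [[ "$SOME_TEST_FLAG" == 1 ]] && echo enabled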
09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:46.176 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.176 
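The gather_supported_nvmf_pci_devs block above keys the e810/x722/mlx arrays on PCI vendor:device IDs (Intel 0x8086, Mellanox 0x15b3) and then reports each match, which is where the "Found 0000:09:00.0 (0x8086 - 0x159b)" lines come from. A stand-alone sketch of the same sysfs lookup for the E810 ID seen in this run (simplified, not the common.sh implementation):

    # List PCI functions whose vendor/device pair matches the Intel E810 (0x159b)
    # NICs reported in the log.
    intel=0x8086 e810_dev=0x159b
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor")
        device=$(<"$dev/device")
        if [ "$vendor" = "$intel" ] && [ "$device" = "$e810_dev" ]; then
            echo "Found ${dev##*/} ($vendor - $device)"
        fi
    done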
09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:46.176 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:46.176 Found net devices under 0000:09:00.0: cvl_0_0 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
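To turn each matching PCI function into a usable interface name, the script globs the device's net/ directory in sysfs and keeps the basename, which yields the cvl_0_0 / cvl_0_1 names reported above. The same lookup in isolation (a minimal sketch; the operstate check here only loosely mirrors the "up == up" test in the trace):

    # Network interfaces backed by a given PCI function appear under .../net/ in sysfs.
    pci=0000:09:00.0
    for ifpath in /sys/bus/pci/devices/$pci/net/*; do
        ifname=${ifpath##*/}
        state=$(<"/sys/class/net/$ifname/operstate")
        echo "Found net device under $pci: $ifname (operstate: $state)"
    done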
00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:46.176 Found net devices under 0000:09:00.1: cvl_0_1 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:46.176 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:46.176 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:46.176 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:46.176 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:46.176 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:46.176 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:46.435 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:46.435 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:46.435 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:46.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:46.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:11:46.436 00:11:46.436 --- 10.0.0.2 ping statistics --- 00:11:46.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.436 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:46.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:46.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:11:46.436 00:11:46.436 --- 10.0.0.1 ping statistics --- 00:11:46.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.436 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3686311 00:11:46.436 09:45:23 
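The nvmf_tcp_init sequence above splits the two E810 ports across network namespaces so that one machine can play both initiator and target: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2 (target side), cvl_0_1 stays in the default namespace as 10.0.0.1 (initiator side), a tagged iptables rule opens TCP port 4420, and a ping in each direction confirms the link. Condensed into one runnable block (interface names are the ones from this run and differ per machine):

    TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip netns add $NS
    ip link set $TGT_IF netns $NS                        # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev $INI_IF                  # initiator address, default namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF
    ip link set $INI_IF up
    ip netns exec $NS ip link set $TGT_IF up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec $NS ping -c 1 10.0.0.1                 # target -> initiator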
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3686311 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3686311 ']' 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.436 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.436 [2024-11-20 09:45:23.200168] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:11:46.436 [2024-11-20 09:45:23.200263] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.436 [2024-11-20 09:45:23.275967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:46.436 [2024-11-20 09:45:23.338196] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.436 [2024-11-20 09:45:23.338252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.436 [2024-11-20 09:45:23.338281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.436 [2024-11-20 09:45:23.338293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.436 [2024-11-20 09:45:23.338309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
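nvmfappstart then launches the target application inside that namespace and waitforlisten polls its RPC socket before any rpc_cmd calls are issued. A simplified sketch of those two steps (paths shortened; the polling loop approximates waitforlisten rather than copying it):

    # Start nvmf_tgt in the target namespace and wait until /var/tmp/spdk.sock answers.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 $nvmfpid 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"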
00:11:46.436 [2024-11-20 09:45:23.339913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.436 [2024-11-20 09:45:23.339956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.436 [2024-11-20 09:45:23.340016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.436 [2024-11-20 09:45:23.340019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.695 [2024-11-20 09:45:23.491245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.695 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.695 09:45:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.696 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.696 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.696 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.696 [2024-11-20 09:45:23.556438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.696 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.696 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:46.696 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:46.696 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:50.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:00.902 rmmod nvme_tcp 00:12:00.902 rmmod nvme_fabrics 00:12:00.902 rmmod nvme_keyring 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3686311 ']' 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3686311 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3686311 ']' 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3686311 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
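With the target up, connect_disconnect.sh issues the RPC sequence traced above (TCP transport, a 64 MiB / 512 B Malloc bdev, subsystem cnode1 with that namespace, and a listener on 10.0.0.2:4420) and then repeatedly attaches and detaches a host; the five "disconnected 1 controller(s)" lines are the output of those iterations. A sketch of the whole flow, where the loop body is an approximation of the test rather than a verbatim copy:

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                       # reported back as Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    for i in $(seq 1 5); do                              # num_iterations=5 in the log
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints "... disconnected 1 controller(s)"
    done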
00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3686311 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3686311' 00:12:00.902 killing process with pid 3686311 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3686311 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3686311 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.902 09:45:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.839 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:02.839 00:12:02.839 real 0m19.002s 00:12:02.839 user 0m56.863s 00:12:02.839 sys 0m3.449s 00:12:02.839 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.839 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:02.839 ************************************ 00:12:02.839 END TEST nvmf_connect_disconnect 00:12:02.839 ************************************ 00:12:02.839 09:45:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:02.839 09:45:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:02.839 09:45:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.839 09:45:39 
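nvmftestfini then tears everything back down: the kernel nvme-tcp stack is unloaded (which also drops nvme_fabrics and nvme_keyring), the target process is killed, the SPDK-tagged iptables rule is stripped out via save/restore, and the namespace and leftover addresses are removed. A condensed sketch of those steps (the namespace removal is assumed to be what _remove_spdk_ns does, since its body is not shown in this trace):

    kill $nvmfpid && wait $nvmfpid                        # killprocess 3686311
    modprobe -v -r nvme-tcp                               # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the SPDK-tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                       # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1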
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:02.839 ************************************ 00:12:02.839 START TEST nvmf_multitarget 00:12:02.839 ************************************ 00:12:02.839 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:03.099 * Looking for test storage... 00:12:03.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:03.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.099 --rc genhtml_branch_coverage=1 00:12:03.099 --rc genhtml_function_coverage=1 00:12:03.099 --rc genhtml_legend=1 00:12:03.099 --rc geninfo_all_blocks=1 00:12:03.099 --rc geninfo_unexecuted_blocks=1 00:12:03.099 00:12:03.099 ' 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:03.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.099 --rc genhtml_branch_coverage=1 00:12:03.099 --rc genhtml_function_coverage=1 00:12:03.099 --rc genhtml_legend=1 00:12:03.099 --rc geninfo_all_blocks=1 00:12:03.099 --rc geninfo_unexecuted_blocks=1 00:12:03.099 00:12:03.099 ' 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:03.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.099 --rc genhtml_branch_coverage=1 00:12:03.099 --rc genhtml_function_coverage=1 00:12:03.099 --rc genhtml_legend=1 00:12:03.099 --rc geninfo_all_blocks=1 00:12:03.099 --rc geninfo_unexecuted_blocks=1 00:12:03.099 00:12:03.099 ' 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:03.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.099 --rc genhtml_branch_coverage=1 00:12:03.099 --rc genhtml_function_coverage=1 00:12:03.099 --rc genhtml_legend=1 00:12:03.099 --rc geninfo_all_blocks=1 00:12:03.099 --rc geninfo_unexecuted_blocks=1 00:12:03.099 00:12:03.099 ' 00:12:03.099 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.099 09:45:39 
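Before the multitarget test body runs, autotest_common.sh probes the installed lcov version and the cmp_versions helper traced above decides whether the 1.x-style LCOV_OPTS still apply (here lcov 1.15 sorts before 2, so they are kept). A stand-alone sketch of the same field-by-field comparison, not a verbatim copy of scripts/common.sh:

    version_lt() {                       # succeeds if $1 sorts before $2
        local -a ver1 ver2
        local IFS=.-: v d1 d2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((d1 < d2)) && return 0
            ((d1 > d2)) && return 1
        done
        return 1                         # equal versions are not "less than"
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "keep 1.x LCOV_OPTS"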
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:03.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:03.100 09:45:39 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:03.100 09:45:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:05.636 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:05.636 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:05.636 Found net devices under 0000:09:00.0: cvl_0_0 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:05.636 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:05.637 Found net devices under 0000:09:00.1: cvl_0_1 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:05.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:12:05.637 00:12:05.637 --- 10.0.0.2 ping statistics --- 00:12:05.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.637 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:05.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:12:05.637 00:12:05.637 --- 10.0.0.1 ping statistics --- 00:12:05.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.637 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3690074 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3690074 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3690074 ']' 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:05.637 [2024-11-20 09:45:42.291850] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
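Note: the nvmf_tcp_init block traced above gives each run a two-port loop: the target port (cvl_0_0) is moved into its own network namespace while the initiator port (cvl_0_1) stays in the root namespace. Condensed from the trace (interface names and the 10.0.0.x addresses are simply what this rig uses; run as root):

  # flush any stale addresses, then move the target port into its own netns
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator side (root netns) gets 10.0.0.1, target side gets 10.0.0.2
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and tag the rule so teardown can find it later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # sanity-check both directions before starting the target (matches the ping output above)
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1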
00:12:05.637 [2024-11-20 09:45:42.291930] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.637 [2024-11-20 09:45:42.363102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.637 [2024-11-20 09:45:42.421649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.637 [2024-11-20 09:45:42.421704] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.637 [2024-11-20 09:45:42.421733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.637 [2024-11-20 09:45:42.421744] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.637 [2024-11-20 09:45:42.421754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.637 [2024-11-20 09:45:42.423239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.637 [2024-11-20 09:45:42.423314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.637 [2024-11-20 09:45:42.423367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.637 [2024-11-20 09:45:42.423370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:05.637 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:05.894 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.894 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:05.894 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:05.894 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:05.894 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:05.895 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:05.895 "nvmf_tgt_1" 00:12:05.895 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:06.152 "nvmf_tgt_2" 00:12:06.152 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
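Note: the multitarget checks around this point drive extra target instances through test/nvmf/target/multitarget_rpc.py; stripped of the jq bookkeeping, the sequence is roughly the following (the matching delete calls appear just below in the trace; $SPDK_DIR stands in for the checkout at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk):

  rpc=$SPDK_DIR/test/nvmf/target/multitarget_rpc.py
  $rpc nvmf_get_targets | jq length        # 1: only the default target exists
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  $rpc nvmf_get_targets | jq length        # now 3
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  $rpc nvmf_get_targets | jq length        # back to 1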
00:12:06.152 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:06.152 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:06.152 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:06.409 true 00:12:06.409 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:06.409 true 00:12:06.409 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:06.409 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:06.667 rmmod nvme_tcp 00:12:06.667 rmmod nvme_fabrics 00:12:06.667 rmmod nvme_keyring 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3690074 ']' 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3690074 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3690074 ']' 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3690074 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3690074 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.667 09:45:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3690074' 00:12:06.667 killing process with pid 3690074 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3690074 00:12:06.667 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3690074 00:12:06.925 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:06.925 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:06.925 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:06.925 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:06.925 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:06.925 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:06.925 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:06.925 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:06.925 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:06.925 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.925 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.925 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.828 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:08.828 00:12:08.828 real 0m5.991s 00:12:08.828 user 0m6.667s 00:12:08.828 sys 0m2.113s 00:12:08.828 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.828 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:08.828 ************************************ 00:12:08.828 END TEST nvmf_multitarget 00:12:08.828 ************************************ 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:09.087 ************************************ 00:12:09.087 START TEST nvmf_rpc 00:12:09.087 ************************************ 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:09.087 * Looking for test storage... 
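Note: the nvmftestfini/iptr teardown in the multitarget run above relies on the SPDK_NVMF comment that was attached when the firewall rule was inserted: tagged rules are filtered out of the saved ruleset and the per-test namespace is dropped. A rough shell equivalent (the actual _remove_spdk_ns helper is hidden behind xtrace_disable in the trace, so the netns delete line is an assumption about what it amounts to here):

  # drop every rule this test tagged, leave the rest of the ruleset untouched
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # tear down the per-test namespace and the initiator-side address
  ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns for this run
  ip -4 addr flush cvl_0_1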
00:12:09.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:09.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.087 --rc genhtml_branch_coverage=1 00:12:09.087 --rc genhtml_function_coverage=1 00:12:09.087 --rc genhtml_legend=1 00:12:09.087 --rc geninfo_all_blocks=1 00:12:09.087 --rc geninfo_unexecuted_blocks=1 00:12:09.087 00:12:09.087 ' 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:09.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.087 --rc genhtml_branch_coverage=1 00:12:09.087 --rc genhtml_function_coverage=1 00:12:09.087 --rc genhtml_legend=1 00:12:09.087 --rc geninfo_all_blocks=1 00:12:09.087 --rc geninfo_unexecuted_blocks=1 00:12:09.087 00:12:09.087 ' 00:12:09.087 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:09.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.088 --rc genhtml_branch_coverage=1 00:12:09.088 --rc genhtml_function_coverage=1 00:12:09.088 --rc genhtml_legend=1 00:12:09.088 --rc geninfo_all_blocks=1 00:12:09.088 --rc geninfo_unexecuted_blocks=1 00:12:09.088 00:12:09.088 ' 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:09.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.088 --rc genhtml_branch_coverage=1 00:12:09.088 --rc genhtml_function_coverage=1 00:12:09.088 --rc genhtml_legend=1 00:12:09.088 --rc geninfo_all_blocks=1 00:12:09.088 --rc geninfo_unexecuted_blocks=1 00:12:09.088 00:12:09.088 ' 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:09.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:09.088 09:45:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:09.088 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:11.622 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:11.622 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:11.622 Found net devices under 0000:09:00.0: cvl_0_0 00:12:11.622 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:11.623 Found net devices under 0000:09:00.1: cvl_0_1 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:11.623 09:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:11.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:12:11.623 00:12:11.623 --- 10.0.0.2 ping statistics --- 00:12:11.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.623 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:11.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:12:11.623 00:12:11.623 --- 10.0.0.1 ping statistics --- 00:12:11.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.623 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3692188 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3692188 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3692188 ']' 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.623 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.623 [2024-11-20 09:45:48.306497] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
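Note: nvmfappstart above launches the target inside the target namespace, so only cvl_0_0 (10.0.0.2) is visible to it, while the nvme connect commands later run from the root namespace over cvl_0_1. Roughly, with $SPDK_DIR again standing in for this workspace's checkout:

  # -i 0 selects shm id 0, -e 0xFFFF enables all tracepoint groups, -m 0xF pins 4 cores
  ip netns exec cvl_0_0_ns_spdk \
      $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # waitforlisten (autotest_common.sh) then polls until the app is accepting RPCs
  # on the UNIX domain socket /var/tmp/spdk.sock before any rpc_cmd call is issued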
00:12:11.623 [2024-11-20 09:45:48.306589] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.623 [2024-11-20 09:45:48.380909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.623 [2024-11-20 09:45:48.441342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.623 [2024-11-20 09:45:48.441396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.623 [2024-11-20 09:45:48.441424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.623 [2024-11-20 09:45:48.441436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.623 [2024-11-20 09:45:48.441445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.623 [2024-11-20 09:45:48.443012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.624 [2024-11-20 09:45:48.443092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.624 [2024-11-20 09:45:48.443041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.624 [2024-11-20 09:45:48.443095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:11.882 "tick_rate": 2700000000, 00:12:11.882 "poll_groups": [ 00:12:11.882 { 00:12:11.882 "name": "nvmf_tgt_poll_group_000", 00:12:11.882 "admin_qpairs": 0, 00:12:11.882 "io_qpairs": 0, 00:12:11.882 "current_admin_qpairs": 0, 00:12:11.882 "current_io_qpairs": 0, 00:12:11.882 "pending_bdev_io": 0, 00:12:11.882 "completed_nvme_io": 0, 00:12:11.882 "transports": [] 00:12:11.882 }, 00:12:11.882 { 00:12:11.882 "name": "nvmf_tgt_poll_group_001", 00:12:11.882 "admin_qpairs": 0, 00:12:11.882 "io_qpairs": 0, 00:12:11.882 "current_admin_qpairs": 0, 00:12:11.882 "current_io_qpairs": 0, 00:12:11.882 "pending_bdev_io": 0, 00:12:11.882 "completed_nvme_io": 0, 00:12:11.882 "transports": [] 00:12:11.882 }, 00:12:11.882 { 00:12:11.882 "name": "nvmf_tgt_poll_group_002", 00:12:11.882 "admin_qpairs": 0, 00:12:11.882 "io_qpairs": 0, 00:12:11.882 
"current_admin_qpairs": 0, 00:12:11.882 "current_io_qpairs": 0, 00:12:11.882 "pending_bdev_io": 0, 00:12:11.882 "completed_nvme_io": 0, 00:12:11.882 "transports": [] 00:12:11.882 }, 00:12:11.882 { 00:12:11.882 "name": "nvmf_tgt_poll_group_003", 00:12:11.882 "admin_qpairs": 0, 00:12:11.882 "io_qpairs": 0, 00:12:11.882 "current_admin_qpairs": 0, 00:12:11.882 "current_io_qpairs": 0, 00:12:11.882 "pending_bdev_io": 0, 00:12:11.882 "completed_nvme_io": 0, 00:12:11.882 "transports": [] 00:12:11.882 } 00:12:11.882 ] 00:12:11.882 }' 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.882 [2024-11-20 09:45:48.706768] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:11.882 "tick_rate": 2700000000, 00:12:11.882 "poll_groups": [ 00:12:11.882 { 00:12:11.882 "name": "nvmf_tgt_poll_group_000", 00:12:11.882 "admin_qpairs": 0, 00:12:11.882 "io_qpairs": 0, 00:12:11.882 "current_admin_qpairs": 0, 00:12:11.882 "current_io_qpairs": 0, 00:12:11.882 "pending_bdev_io": 0, 00:12:11.882 "completed_nvme_io": 0, 00:12:11.882 "transports": [ 00:12:11.882 { 00:12:11.882 "trtype": "TCP" 00:12:11.882 } 00:12:11.882 ] 00:12:11.882 }, 00:12:11.882 { 00:12:11.882 "name": "nvmf_tgt_poll_group_001", 00:12:11.882 "admin_qpairs": 0, 00:12:11.882 "io_qpairs": 0, 00:12:11.882 "current_admin_qpairs": 0, 00:12:11.882 "current_io_qpairs": 0, 00:12:11.882 "pending_bdev_io": 0, 00:12:11.882 "completed_nvme_io": 0, 00:12:11.882 "transports": [ 00:12:11.882 { 00:12:11.882 "trtype": "TCP" 00:12:11.882 } 00:12:11.882 ] 00:12:11.882 }, 00:12:11.882 { 00:12:11.882 "name": "nvmf_tgt_poll_group_002", 00:12:11.882 "admin_qpairs": 0, 00:12:11.882 "io_qpairs": 0, 00:12:11.882 "current_admin_qpairs": 0, 00:12:11.882 "current_io_qpairs": 0, 00:12:11.882 "pending_bdev_io": 0, 00:12:11.882 "completed_nvme_io": 0, 00:12:11.882 "transports": [ 00:12:11.882 { 00:12:11.882 "trtype": "TCP" 
00:12:11.882 } 00:12:11.882 ] 00:12:11.882 }, 00:12:11.882 { 00:12:11.882 "name": "nvmf_tgt_poll_group_003", 00:12:11.882 "admin_qpairs": 0, 00:12:11.882 "io_qpairs": 0, 00:12:11.882 "current_admin_qpairs": 0, 00:12:11.882 "current_io_qpairs": 0, 00:12:11.882 "pending_bdev_io": 0, 00:12:11.882 "completed_nvme_io": 0, 00:12:11.882 "transports": [ 00:12:11.882 { 00:12:11.882 "trtype": "TCP" 00:12:11.882 } 00:12:11.882 ] 00:12:11.882 } 00:12:11.882 ] 00:12:11.882 }' 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:11.882 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:11.883 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.141 Malloc1 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.141 [2024-11-20 09:45:48.860500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:12:12.141 [2024-11-20 09:45:48.883183] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:12:12.141 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:12.141 could not add new controller: failed to write to nvme-fabrics device 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:12.141 09:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.141 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:12.707 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:12.707 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:12.707 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.707 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:12.707 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:15.236 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:15.237 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:15.237 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:15.237 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:15.237 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.237 [2024-11-20 09:45:51.672783] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:12:15.237 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:15.237 could not add new controller: failed to write to nvme-fabrics device 00:12:15.237 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:15.237 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:15.237 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:15.237 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:15.237 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:15.237 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.237 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.237 
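The trace above exercises SPDK's per-subsystem host access control: the connect is rejected with "does not allow host" while the allowed-host list is empty, succeeds after nvmf_subsystem_add_host, is rejected again once the host is removed, and finally succeeds for any initiator after allow_any_host is enabled. A minimal sketch of that same sequence using scripts/rpc.py and nvme-cli directly, assuming a running nvmf_tgt that already has the tcp transport, the Malloc1-backed subsystem and the 10.0.0.2:4420 listener shown in this trace (the host NQN is just the one this test host happens to use):

  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  # Rejected: allowed-host list is empty and allow_any_host is off.
  nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" || true
  ./scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"     # whitelist this host
  nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
  nvme disconnect -n "$SUBNQN"
  ./scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"  # back to rejected
  ./scripts/rpc.py nvmf_subsystem_allow_any_host -e "$SUBNQN"       # now any host may connect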
09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.237 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.495 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:15.495 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:15.495 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.495 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:15.495 09:45:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:18.023 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:18.023 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:18.023 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:18.023 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:18.023 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:18.023 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:18.023 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:18.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.023 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:18.023 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:18.023 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:18.023 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.023 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:18.023 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.023 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:18.023 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:18.023 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.023 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.024 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.024 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:18.024 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:18.024 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:18.024 
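The waitforserial / waitforserial_disconnect calls visible just above simply poll lsblk until a block device whose SERIAL matches the subsystem's serial number appears (or disappears). A rough stand-in for the appearance case, using the same 15-retry / 2-second cadence the trace shows (the real helpers live in common/autotest_common.sh and also handle multiple expected devices):

  wait_for_serial() {
    # Poll until a block device with SERIAL matching $1 shows up.
    local serial=$1 i=0
    while (( i++ <= 15 )); do
      (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
      sleep 2
    done
    return 1
  }
  wait_for_serial SPDKISFASTANDAWESOME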
09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.024 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.024 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.024 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.024 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.024 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.024 [2024-11-20 09:45:54.463029] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.024 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.024 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:18.024 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.024 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.024 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.024 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:18.024 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.024 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.024 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.024 09:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.282 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:18.282 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:18.282 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.282 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:18.282 09:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.809 [2024-11-20 09:45:57.322096] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.809 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.374 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.374 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:21.374 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.374 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:21.374 09:45:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.356 [2024-11-20 09:46:00.152938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.356 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.357 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.921 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:23.921 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:23.921 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.921 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:23.921 09:46:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:26.447 
09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:26.447 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:26.447 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:26.447 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:26.447 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:26.447 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:26.447 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.447 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.447 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:26.447 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:26.447 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.447 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:26.447 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.447 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:26.447 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:26.447 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.447 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.447 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.448 [2024-11-20 09:46:02.892408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.448 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:26.705 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:26.705 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:26.705 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:26.705 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:26.705 09:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:28.620 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:28.620 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:28.620 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:28.620 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:28.620 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.620 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:28.620 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.878 [2024-11-20 09:46:05.676908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.878 09:46:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:29.808 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.808 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:29.808 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.808 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:29.808 09:46:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:31.705 
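The five nearly identical blocks above are iterations of the rpc.sh@81-94 loop: create a subsystem, add a TCP listener, attach Malloc1 as namespace 5, open it to any host, connect from the initiator, then tear it all back down. One iteration condensed into direct scripts/rpc.py / nvme-cli calls (values taken from the trace; the running target, the tcp transport and the Malloc1 bdev are assumed to exist already):

  SUBNQN=nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5    # expose the bdev as NSID 5
  ./scripts/rpc.py nvmf_subsystem_allow_any_host "$SUBNQN"
  nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420
  wait_for_serial SPDKISFASTANDAWESOME    # polling helper sketched earlier
  nvme disconnect -n "$SUBNQN"
  ./scripts/rpc.py nvmf_subsystem_remove_ns "$SUBNQN" 5
  ./scripts/rpc.py nvmf_delete_subsystem "$SUBNQN"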
09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.705 [2024-11-20 09:46:08.589336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.705 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.963 [2024-11-20 09:46:08.637391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.963 
09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.963 [2024-11-20 09:46:08.685566] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.963 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 [2024-11-20 09:46:08.733721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 [2024-11-20 09:46:08.781874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:31.964 "tick_rate": 2700000000, 00:12:31.964 "poll_groups": [ 00:12:31.964 { 00:12:31.964 "name": "nvmf_tgt_poll_group_000", 00:12:31.964 "admin_qpairs": 2, 00:12:31.964 "io_qpairs": 84, 00:12:31.964 "current_admin_qpairs": 0, 00:12:31.964 "current_io_qpairs": 0, 00:12:31.964 "pending_bdev_io": 0, 00:12:31.964 "completed_nvme_io": 183, 00:12:31.964 "transports": [ 00:12:31.964 { 00:12:31.964 "trtype": "TCP" 00:12:31.964 } 00:12:31.964 ] 00:12:31.964 }, 00:12:31.964 { 00:12:31.964 "name": "nvmf_tgt_poll_group_001", 00:12:31.964 "admin_qpairs": 2, 00:12:31.964 "io_qpairs": 84, 00:12:31.964 "current_admin_qpairs": 0, 00:12:31.964 "current_io_qpairs": 0, 00:12:31.964 "pending_bdev_io": 0, 00:12:31.964 "completed_nvme_io": 144, 00:12:31.964 "transports": [ 00:12:31.964 { 00:12:31.964 "trtype": "TCP" 00:12:31.964 } 00:12:31.964 ] 00:12:31.964 }, 00:12:31.964 { 00:12:31.964 "name": "nvmf_tgt_poll_group_002", 00:12:31.964 "admin_qpairs": 1, 00:12:31.964 "io_qpairs": 84, 00:12:31.964 "current_admin_qpairs": 0, 00:12:31.964 "current_io_qpairs": 0, 00:12:31.964 "pending_bdev_io": 0, 00:12:31.964 "completed_nvme_io": 176, 00:12:31.964 "transports": [ 00:12:31.964 { 00:12:31.964 "trtype": "TCP" 00:12:31.964 } 00:12:31.964 ] 00:12:31.964 }, 00:12:31.964 { 00:12:31.964 "name": "nvmf_tgt_poll_group_003", 00:12:31.964 "admin_qpairs": 2, 00:12:31.964 "io_qpairs": 84, 00:12:31.964 "current_admin_qpairs": 0, 00:12:31.964 "current_io_qpairs": 0, 00:12:31.964 "pending_bdev_io": 0, 00:12:31.964 "completed_nvme_io": 183, 00:12:31.964 "transports": [ 00:12:31.964 { 00:12:31.964 "trtype": "TCP" 00:12:31.964 } 00:12:31.964 ] 00:12:31.964 } 00:12:31.964 ] 00:12:31.964 }' 00:12:31.964 09:46:08 
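nvmf_get_stats returns one entry per target poll group; the jsum helper invoked next is roughly a jq projection of one field piped through awk to add the values up (here 2+2+1+2 = 7 admin qpairs and 4x84 = 336 I/O qpairs). The same aggregation can be done in jq alone, for example:

  # Total I/O qpairs handled across all poll groups (jq-only version of jsum).
  ./scripts/rpc.py nvmf_get_stats | jq '[.poll_groups[].io_qpairs] | add'
  # Roughly what the jsum helper in rpc.sh does with the captured stats:
  ./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'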
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:31.964 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:32.223 rmmod nvme_tcp 00:12:32.223 rmmod nvme_fabrics 00:12:32.223 rmmod nvme_keyring 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3692188 ']' 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3692188 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3692188 ']' 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3692188 00:12:32.223 09:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:32.223 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:32.223 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3692188 00:12:32.223 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:32.223 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:32.223 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3692188' 00:12:32.223 killing process with pid 3692188 00:12:32.223 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3692188 00:12:32.223 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3692188 00:12:32.482 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:32.482 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:32.482 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:32.482 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:32.482 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:32.482 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:32.482 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:32.482 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:32.482 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:32.482 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.482 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.482 09:46:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:35.012 00:12:35.012 real 0m25.533s 00:12:35.012 user 1m22.737s 00:12:35.012 sys 0m4.215s 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.012 ************************************ 00:12:35.012 END TEST nvmf_rpc 00:12:35.012 ************************************ 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:35.012 ************************************ 00:12:35.012 START TEST nvmf_invalid 00:12:35.012 ************************************ 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:35.012 * Looking for test storage... 
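The nvmftestfini sequence above is the standard cleanup: unload the kernel NVMe/TCP initiator modules, stop the target process, and undo the firewall and interface state that nvmftestinit set up. Approximately (the PID and interface name are specific to this run):

  modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring      # drop the initiator modules
  kill 3692188                                           # nvmf_tgt PID from this run; killprocess then waits for it
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove the SPDK_NVMF firewall rules
  ip -4 addr flush cvl_0_1                               # test interface configured by nvmftestinit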
00:12:35.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:35.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.012 --rc genhtml_branch_coverage=1 00:12:35.012 --rc genhtml_function_coverage=1 00:12:35.012 --rc genhtml_legend=1 00:12:35.012 --rc geninfo_all_blocks=1 00:12:35.012 --rc geninfo_unexecuted_blocks=1 00:12:35.012 00:12:35.012 ' 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:35.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.012 --rc genhtml_branch_coverage=1 00:12:35.012 --rc genhtml_function_coverage=1 00:12:35.012 --rc genhtml_legend=1 00:12:35.012 --rc geninfo_all_blocks=1 00:12:35.012 --rc geninfo_unexecuted_blocks=1 00:12:35.012 00:12:35.012 ' 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:35.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.012 --rc genhtml_branch_coverage=1 00:12:35.012 --rc genhtml_function_coverage=1 00:12:35.012 --rc genhtml_legend=1 00:12:35.012 --rc geninfo_all_blocks=1 00:12:35.012 --rc geninfo_unexecuted_blocks=1 00:12:35.012 00:12:35.012 ' 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:35.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.012 --rc genhtml_branch_coverage=1 00:12:35.012 --rc genhtml_function_coverage=1 00:12:35.012 --rc genhtml_legend=1 00:12:35.012 --rc geninfo_all_blocks=1 00:12:35.012 --rc geninfo_unexecuted_blocks=1 00:12:35.012 00:12:35.012 ' 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:35.012 09:46:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.012 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:35.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:35.013 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.913 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:36.914 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:36.914 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:36.914 Found net devices under 0000:09:00.0: cvl_0_0 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:36.914 Found net devices under 0000:09:00.1: cvl_0_1 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:36.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:12:36.914 00:12:36.914 --- 10.0.0.2 ping statistics --- 00:12:36.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.914 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:36.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:12:36.914 00:12:36.914 --- 10.0.0.1 ping statistics --- 00:12:36.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.914 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3696694 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3696694 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3696694 ']' 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.914 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.915 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.915 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:36.915 [2024-11-20 09:46:13.789182] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
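The nvmf_tcp_init sequence traced above (nvmf/common.sh@250-@291) builds the phy-mode topology by moving the target-side E810 port into a private network namespace while the initiator-side port stays in the root namespace, then proves reachability in both directions with single pings. A condensed sketch of the same steps, reusing the interface names and 10.0.0.0/24 addressing from this run:

# target NIC is hidden in a namespace; initiator NIC stays in the root namespace
tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
ip -4 addr flush "$tgt_if"
ip -4 addr flush "$ini_if"
ip netns add "$ns"
ip link set "$tgt_if" netns "$ns"
ip addr add 10.0.0.1/24 dev "$ini_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
ip link set "$ini_if" up
ip netns exec "$ns" ip link set "$tgt_if" up
ip netns exec "$ns" ip link set lo up
# open the default NVMe/TCP listener port and check both directions
iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$ns" ping -c 1 10.0.0.1

The nvmfappstart call just above then runs nvmf_tgt through ip netns exec cvl_0_0_ns_spdk, so the target application itself lives inside that namespace.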
00:12:36.915 [2024-11-20 09:46:13.789277] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.172 [2024-11-20 09:46:13.863060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.172 [2024-11-20 09:46:13.918926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.172 [2024-11-20 09:46:13.918981] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.172 [2024-11-20 09:46:13.919010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.172 [2024-11-20 09:46:13.919021] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.172 [2024-11-20 09:46:13.919030] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:37.172 [2024-11-20 09:46:13.920636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.172 [2024-11-20 09:46:13.920696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.172 [2024-11-20 09:46:13.920762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.172 [2024-11-20 09:46:13.920765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.172 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:37.172 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:37.172 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:37.172 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:37.172 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:37.172 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.172 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:37.172 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12839 00:12:37.430 [2024-11-20 09:46:14.341917] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:37.687 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:37.687 { 00:12:37.687 "nqn": "nqn.2016-06.io.spdk:cnode12839", 00:12:37.687 "tgt_name": "foobar", 00:12:37.687 "method": "nvmf_create_subsystem", 00:12:37.687 "req_id": 1 00:12:37.687 } 00:12:37.687 Got JSON-RPC error response 00:12:37.687 response: 00:12:37.687 { 00:12:37.687 "code": -32603, 00:12:37.687 "message": "Unable to find target foobar" 00:12:37.687 }' 00:12:37.687 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:37.687 { 00:12:37.687 "nqn": "nqn.2016-06.io.spdk:cnode12839", 00:12:37.687 "tgt_name": "foobar", 00:12:37.687 "method": "nvmf_create_subsystem", 00:12:37.687 "req_id": 1 00:12:37.687 } 00:12:37.687 Got JSON-RPC error response 00:12:37.687 
response: 00:12:37.687 { 00:12:37.687 "code": -32603, 00:12:37.687 "message": "Unable to find target foobar" 00:12:37.687 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:37.687 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:37.687 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1609 00:12:37.945 [2024-11-20 09:46:14.610881] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1609: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:37.945 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:37.945 { 00:12:37.945 "nqn": "nqn.2016-06.io.spdk:cnode1609", 00:12:37.945 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:37.945 "method": "nvmf_create_subsystem", 00:12:37.945 "req_id": 1 00:12:37.945 } 00:12:37.945 Got JSON-RPC error response 00:12:37.945 response: 00:12:37.945 { 00:12:37.945 "code": -32602, 00:12:37.945 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:37.945 }' 00:12:37.945 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:37.945 { 00:12:37.945 "nqn": "nqn.2016-06.io.spdk:cnode1609", 00:12:37.945 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:37.945 "method": "nvmf_create_subsystem", 00:12:37.945 "req_id": 1 00:12:37.945 } 00:12:37.945 Got JSON-RPC error response 00:12:37.945 response: 00:12:37.945 { 00:12:37.945 "code": -32602, 00:12:37.945 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:37.945 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:37.945 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:37.945 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14162 00:12:38.202 [2024-11-20 09:46:14.879687] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14162: invalid model number 'SPDK_Controller' 00:12:38.202 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:38.202 { 00:12:38.202 "nqn": "nqn.2016-06.io.spdk:cnode14162", 00:12:38.202 "model_number": "SPDK_Controller\u001f", 00:12:38.202 "method": "nvmf_create_subsystem", 00:12:38.202 "req_id": 1 00:12:38.202 } 00:12:38.202 Got JSON-RPC error response 00:12:38.202 response: 00:12:38.202 { 00:12:38.203 "code": -32602, 00:12:38.203 "message": "Invalid MN SPDK_Controller\u001f" 00:12:38.203 }' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:38.203 { 00:12:38.203 "nqn": "nqn.2016-06.io.spdk:cnode14162", 00:12:38.203 "model_number": "SPDK_Controller\u001f", 00:12:38.203 "method": "nvmf_create_subsystem", 00:12:38.203 "req_id": 1 00:12:38.203 } 00:12:38.203 Got JSON-RPC error response 00:12:38.203 response: 00:12:38.203 { 00:12:38.203 "code": -32602, 00:12:38.203 "message": "Invalid MN SPDK_Controller\u001f" 00:12:38.203 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:38.203 09:46:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.203 09:46:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:38.203 
09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.203 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 
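Every negative case in this test follows the shape already visible above for the foobar target, the SPDKISFASTANDAWESOME\x1f serial number and the SPDK_Controller\x1f model number: call nvmf_create_subsystem with exactly one malformed argument, capture the JSON-RPC error, and glob-match the message. A minimal sketch of that pattern; the 2>&1 capture and the trailing || true are illustrative glue, not lifted from invalid.sh:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# unknown target name -> "Unable to find target foobar"
out=$("$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12839 2>&1) || true
[[ $out == *"Unable to find target"* ]]

# serial number containing a control character (0x1f) -> "Invalid SN ..."
out=$("$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1609 2>&1) || true
[[ $out == *"Invalid SN"* ]]

# model number containing a control character -> "Invalid MN ..."
out=$("$rpc" nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14162 2>&1) || true
[[ $out == *"Invalid MN"* ]]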
00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ } == \- ]] 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '}^+QfZ"6Rhcv9F\2nuIh:' 00:12:38.204 09:46:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '}^+QfZ"6Rhcv9F\2nuIh:' nqn.2016-06.io.spdk:cnode1487 00:12:38.463 [2024-11-20 09:46:15.228883] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1487: invalid serial number '}^+QfZ"6Rhcv9F\2nuIh:' 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:38.463 { 00:12:38.463 "nqn": "nqn.2016-06.io.spdk:cnode1487", 00:12:38.463 "serial_number": "}^+QfZ\"6Rhcv9F\\2nuIh:", 00:12:38.463 "method": "nvmf_create_subsystem", 00:12:38.463 "req_id": 1 00:12:38.463 } 00:12:38.463 Got JSON-RPC error response 00:12:38.463 response: 00:12:38.463 { 00:12:38.463 "code": -32602, 00:12:38.463 "message": "Invalid SN }^+QfZ\"6Rhcv9F\\2nuIh:" 00:12:38.463 }' 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:38.463 { 00:12:38.463 "nqn": "nqn.2016-06.io.spdk:cnode1487", 00:12:38.463 "serial_number": "}^+QfZ\"6Rhcv9F\\2nuIh:", 00:12:38.463 "method": "nvmf_create_subsystem", 00:12:38.463 "req_id": 1 00:12:38.463 } 00:12:38.463 Got JSON-RPC error response 00:12:38.463 response: 00:12:38.463 { 00:12:38.463 "code": -32602, 00:12:38.463 "message": "Invalid SN }^+QfZ\"6Rhcv9F\\2nuIh:" 00:12:38.463 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' 
'74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:38.463 
09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:38.463 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
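The long printf %x / echo -e run in this stretch of the trace is gen_random_s assembling a 41-character string, one character at a time, from the chars array of ASCII codes 32 through 127; an earlier pass built the 21-character serial number that cnode1487 rejected with "Invalid SN", and this one is presumably destined for the matching invalid model-number check that falls outside this excerpt. A compact sketch of the same idea; printf -v is used here instead of capturing echo -e so that spaces survive the append, a small deviation from the traced commands:

# Build a pseudo-random string of $1 characters drawn from ASCII 32..127.
# RANDOM=0 near the top of the traced script makes the sequence reproducible.
gen_random_s() {
    local length=$1 ll code ch string=
    local chars=({32..127})
    for (( ll = 0; ll < length; ll++ )); do
        code=${chars[RANDOM % ${#chars[@]}]}
        printf -v ch '%b' "$(printf '\\x%x' "$code")"   # code -> \xNN -> character
        string+=$ch
    done
    printf '%s\n' "$string"
}

gen_random_s 41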
00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x25' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 33 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:38.464 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
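Once the loop finishes (the @31 echo a little further down), the 41-character result is passed to scripts/rpc.py nvmf_create_subsystem as the -d model number for nqn.2016-06.io.spdk:cnode17831, and the test only requires that the JSON-RPC error text contain "Invalid MN". A hedged sketch of that check, using the rpc.py path, flag, and NQN visible in the trace (the wrapper name and the 2>&1 || true error handling are assumptions, not the actual invalid.sh code):

    # Illustrative only: check_invalid_mn is a hypothetical wrapper; the rpc.py
    # path, -d flag and NQN are the ones shown in the trace below.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    check_invalid_mn() {
        local mn=$1 out
        out=$("$rpc_py" nvmf_create_subsystem -d "$mn" nqn.2016-06.io.spdk:cnode17831 2>&1) || true
        [[ $out == *"Invalid MN"* ]]          # pass if the target rejected the model number
    }
    check_invalid_mn "$string" && echo "target correctly rejected the 41-char MN"
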
00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.465 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.722 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:38.722 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:38.722 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:38.722 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.722 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.722 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:38.722 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x46' 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ / == \- ]] 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '/Yd-0F#-d?=r&-UR%T[WAn|!x`*q>.i"8ukDF0GKa' 00:12:38.723 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '/Yd-0F#-d?=r&-UR%T[WAn|!x`*q>.i"8ukDF0GKa' nqn.2016-06.io.spdk:cnode17831 00:12:38.980 [2024-11-20 09:46:15.654275] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17831: invalid model number '/Yd-0F#-d?=r&-UR%T[WAn|!x`*q>.i"8ukDF0GKa' 00:12:38.980 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:38.980 { 00:12:38.980 "nqn": "nqn.2016-06.io.spdk:cnode17831", 00:12:38.980 "model_number": "/Yd-0F#-d?=r&-UR%T[WAn|!x`*q>.i\"8ukDF0GKa", 00:12:38.980 "method": "nvmf_create_subsystem", 00:12:38.980 "req_id": 1 00:12:38.980 } 00:12:38.980 Got JSON-RPC error response 00:12:38.980 response: 00:12:38.980 { 00:12:38.980 "code": -32602, 
00:12:38.980 "message": "Invalid MN /Yd-0F#-d?=r&-UR%T[WAn|!x`*q>.i\"8ukDF0GKa" 00:12:38.980 }' 00:12:38.980 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:38.980 { 00:12:38.980 "nqn": "nqn.2016-06.io.spdk:cnode17831", 00:12:38.980 "model_number": "/Yd-0F#-d?=r&-UR%T[WAn|!x`*q>.i\"8ukDF0GKa", 00:12:38.980 "method": "nvmf_create_subsystem", 00:12:38.980 "req_id": 1 00:12:38.980 } 00:12:38.980 Got JSON-RPC error response 00:12:38.980 response: 00:12:38.980 { 00:12:38.980 "code": -32602, 00:12:38.980 "message": "Invalid MN /Yd-0F#-d?=r&-UR%T[WAn|!x`*q>.i\"8ukDF0GKa" 00:12:38.980 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:38.980 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:39.238 [2024-11-20 09:46:15.927244] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.238 09:46:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:39.494 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:39.495 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:39.495 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:39.495 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:39.495 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:39.751 [2024-11-20 09:46:16.473029] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:39.751 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:39.751 { 00:12:39.751 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:39.751 "listen_address": { 00:12:39.751 "trtype": "tcp", 00:12:39.751 "traddr": "", 00:12:39.751 "trsvcid": "4421" 00:12:39.751 }, 00:12:39.751 "method": "nvmf_subsystem_remove_listener", 00:12:39.751 "req_id": 1 00:12:39.751 } 00:12:39.751 Got JSON-RPC error response 00:12:39.751 response: 00:12:39.751 { 00:12:39.751 "code": -32602, 00:12:39.751 "message": "Invalid parameters" 00:12:39.751 }' 00:12:39.751 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:39.751 { 00:12:39.751 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:39.751 "listen_address": { 00:12:39.751 "trtype": "tcp", 00:12:39.751 "traddr": "", 00:12:39.751 "trsvcid": "4421" 00:12:39.751 }, 00:12:39.751 "method": "nvmf_subsystem_remove_listener", 00:12:39.751 "req_id": 1 00:12:39.751 } 00:12:39.752 Got JSON-RPC error response 00:12:39.752 response: 00:12:39.752 { 00:12:39.752 "code": -32602, 00:12:39.752 "message": "Invalid parameters" 00:12:39.752 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:39.752 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23465 -i 0 00:12:40.037 [2024-11-20 09:46:16.745884] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23465: invalid cntlid range [0-65519] 00:12:40.037 09:46:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:40.037 { 00:12:40.037 "nqn": "nqn.2016-06.io.spdk:cnode23465", 00:12:40.037 "min_cntlid": 0, 00:12:40.037 "method": "nvmf_create_subsystem", 00:12:40.037 "req_id": 1 00:12:40.037 } 00:12:40.037 Got JSON-RPC error response 00:12:40.037 response: 00:12:40.037 { 00:12:40.037 "code": -32602, 00:12:40.037 "message": "Invalid cntlid range [0-65519]" 00:12:40.037 }' 00:12:40.037 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:40.037 { 00:12:40.037 "nqn": "nqn.2016-06.io.spdk:cnode23465", 00:12:40.037 "min_cntlid": 0, 00:12:40.037 "method": "nvmf_create_subsystem", 00:12:40.037 "req_id": 1 00:12:40.037 } 00:12:40.037 Got JSON-RPC error response 00:12:40.037 response: 00:12:40.037 { 00:12:40.037 "code": -32602, 00:12:40.037 "message": "Invalid cntlid range [0-65519]" 00:12:40.037 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:40.037 09:46:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27947 -i 65520 00:12:40.294 [2024-11-20 09:46:17.014784] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27947: invalid cntlid range [65520-65519] 00:12:40.294 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:40.294 { 00:12:40.294 "nqn": "nqn.2016-06.io.spdk:cnode27947", 00:12:40.294 "min_cntlid": 65520, 00:12:40.294 "method": "nvmf_create_subsystem", 00:12:40.294 "req_id": 1 00:12:40.294 } 00:12:40.294 Got JSON-RPC error response 00:12:40.294 response: 00:12:40.294 { 00:12:40.294 "code": -32602, 00:12:40.294 "message": "Invalid cntlid range [65520-65519]" 00:12:40.294 }' 00:12:40.294 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:40.294 { 00:12:40.294 "nqn": "nqn.2016-06.io.spdk:cnode27947", 00:12:40.294 "min_cntlid": 65520, 00:12:40.294 "method": "nvmf_create_subsystem", 00:12:40.294 "req_id": 1 00:12:40.294 } 00:12:40.294 Got JSON-RPC error response 00:12:40.294 response: 00:12:40.294 { 00:12:40.294 "code": -32602, 00:12:40.294 "message": "Invalid cntlid range [65520-65519]" 00:12:40.294 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:40.294 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9528 -I 0 00:12:40.552 [2024-11-20 09:46:17.283682] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9528: invalid cntlid range [1-0] 00:12:40.552 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:40.552 { 00:12:40.552 "nqn": "nqn.2016-06.io.spdk:cnode9528", 00:12:40.552 "max_cntlid": 0, 00:12:40.552 "method": "nvmf_create_subsystem", 00:12:40.552 "req_id": 1 00:12:40.552 } 00:12:40.552 Got JSON-RPC error response 00:12:40.552 response: 00:12:40.552 { 00:12:40.552 "code": -32602, 00:12:40.552 "message": "Invalid cntlid range [1-0]" 00:12:40.552 }' 00:12:40.552 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:40.552 { 00:12:40.552 "nqn": "nqn.2016-06.io.spdk:cnode9528", 00:12:40.552 "max_cntlid": 0, 00:12:40.552 "method": "nvmf_create_subsystem", 00:12:40.552 "req_id": 1 00:12:40.552 } 00:12:40.552 Got JSON-RPC error response 00:12:40.552 
response: 00:12:40.552 { 00:12:40.552 "code": -32602, 00:12:40.552 "message": "Invalid cntlid range [1-0]" 00:12:40.552 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:40.552 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9087 -I 65520 00:12:40.809 [2024-11-20 09:46:17.564637] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9087: invalid cntlid range [1-65520] 00:12:40.809 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:40.809 { 00:12:40.809 "nqn": "nqn.2016-06.io.spdk:cnode9087", 00:12:40.809 "max_cntlid": 65520, 00:12:40.809 "method": "nvmf_create_subsystem", 00:12:40.809 "req_id": 1 00:12:40.809 } 00:12:40.809 Got JSON-RPC error response 00:12:40.809 response: 00:12:40.809 { 00:12:40.809 "code": -32602, 00:12:40.809 "message": "Invalid cntlid range [1-65520]" 00:12:40.809 }' 00:12:40.809 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:40.809 { 00:12:40.809 "nqn": "nqn.2016-06.io.spdk:cnode9087", 00:12:40.809 "max_cntlid": 65520, 00:12:40.809 "method": "nvmf_create_subsystem", 00:12:40.809 "req_id": 1 00:12:40.809 } 00:12:40.809 Got JSON-RPC error response 00:12:40.809 response: 00:12:40.809 { 00:12:40.809 "code": -32602, 00:12:40.809 "message": "Invalid cntlid range [1-65520]" 00:12:40.809 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:40.809 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12788 -i 6 -I 5 00:12:41.067 [2024-11-20 09:46:17.841537] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12788: invalid cntlid range [6-5] 00:12:41.067 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:41.067 { 00:12:41.067 "nqn": "nqn.2016-06.io.spdk:cnode12788", 00:12:41.067 "min_cntlid": 6, 00:12:41.067 "max_cntlid": 5, 00:12:41.067 "method": "nvmf_create_subsystem", 00:12:41.067 "req_id": 1 00:12:41.067 } 00:12:41.067 Got JSON-RPC error response 00:12:41.067 response: 00:12:41.067 { 00:12:41.067 "code": -32602, 00:12:41.067 "message": "Invalid cntlid range [6-5]" 00:12:41.067 }' 00:12:41.067 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:41.067 { 00:12:41.067 "nqn": "nqn.2016-06.io.spdk:cnode12788", 00:12:41.067 "min_cntlid": 6, 00:12:41.067 "max_cntlid": 5, 00:12:41.067 "method": "nvmf_create_subsystem", 00:12:41.067 "req_id": 1 00:12:41.067 } 00:12:41.067 Got JSON-RPC error response 00:12:41.067 response: 00:12:41.067 { 00:12:41.067 "code": -32602, 00:12:41.067 "message": "Invalid cntlid range [6-5]" 00:12:41.067 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:41.067 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:41.325 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:41.325 { 00:12:41.325 "name": "foobar", 00:12:41.325 "method": "nvmf_delete_target", 00:12:41.325 "req_id": 1 00:12:41.325 } 00:12:41.325 Got JSON-RPC error response 00:12:41.325 response: 00:12:41.325 { 00:12:41.325 "code": -32602, 00:12:41.325 
"message": "The specified target doesn'\''t exist, cannot delete it." 00:12:41.325 }' 00:12:41.325 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:41.325 { 00:12:41.325 "name": "foobar", 00:12:41.325 "method": "nvmf_delete_target", 00:12:41.325 "req_id": 1 00:12:41.325 } 00:12:41.325 Got JSON-RPC error response 00:12:41.325 response: 00:12:41.325 { 00:12:41.325 "code": -32602, 00:12:41.325 "message": "The specified target doesn't exist, cannot delete it." 00:12:41.325 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:41.325 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:41.325 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:41.325 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:41.325 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:41.325 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:41.325 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:41.325 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:41.325 09:46:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:41.325 rmmod nvme_tcp 00:12:41.325 rmmod nvme_fabrics 00:12:41.325 rmmod nvme_keyring 00:12:41.325 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:41.325 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:41.325 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:41.325 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3696694 ']' 00:12:41.325 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3696694 00:12:41.325 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3696694 ']' 00:12:41.325 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3696694 00:12:41.325 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:41.325 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.325 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3696694 00:12:41.325 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:41.325 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:41.325 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3696694' 00:12:41.325 killing process with pid 3696694 00:12:41.325 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3696694 00:12:41.325 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3696694 00:12:41.583 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:41.583 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ 
tcp == \t\c\p ]] 00:12:41.583 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:41.583 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:41.583 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:41.583 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:41.583 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:41.583 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:41.583 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:41.583 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.583 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.583 09:46:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.491 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:43.491 00:12:43.491 real 0m8.984s 00:12:43.491 user 0m21.500s 00:12:43.491 sys 0m2.510s 00:12:43.491 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.491 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:43.491 ************************************ 00:12:43.491 END TEST nvmf_invalid 00:12:43.491 ************************************ 00:12:43.491 09:46:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:43.491 09:46:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:43.491 09:46:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.491 09:46:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:43.748 ************************************ 00:12:43.748 START TEST nvmf_connect_stress 00:12:43.748 ************************************ 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:43.748 * Looking for test storage... 
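nvmf_invalid is finished at this point and connect_stress.sh begins its preamble: it locates the test sources (the "Found test storage" entry just below) and then scripts/common.sh decides whether the installed lcov predates version 2 by splitting both version strings on '.', '-' and ':' and comparing the numeric fields in order (the lt/cmp_versions entries that follow). A simplified sketch of that field-by-field comparison, assuming purely numeric fields; this is not the actual common.sh implementation:

    # Simplified sketch of the cmp_versions-style check traced below; assumes
    # numeric version fields and treats missing fields as 0.
    version_lt() {
        local IFS=.-:                  # split on '.', '-' and ':' as the trace shows
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v a b
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a < b )) && return 0    # strictly older
            (( a > b )) && return 1
        done
        return 1                       # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 sorts before 2"
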
00:12:43.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.748 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:43.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.749 --rc genhtml_branch_coverage=1 00:12:43.749 --rc genhtml_function_coverage=1 00:12:43.749 --rc genhtml_legend=1 00:12:43.749 --rc geninfo_all_blocks=1 00:12:43.749 --rc geninfo_unexecuted_blocks=1 00:12:43.749 00:12:43.749 ' 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:43.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.749 --rc genhtml_branch_coverage=1 00:12:43.749 --rc genhtml_function_coverage=1 00:12:43.749 --rc genhtml_legend=1 00:12:43.749 --rc geninfo_all_blocks=1 00:12:43.749 --rc geninfo_unexecuted_blocks=1 00:12:43.749 00:12:43.749 ' 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:43.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.749 --rc genhtml_branch_coverage=1 00:12:43.749 --rc genhtml_function_coverage=1 00:12:43.749 --rc genhtml_legend=1 00:12:43.749 --rc geninfo_all_blocks=1 00:12:43.749 --rc geninfo_unexecuted_blocks=1 00:12:43.749 00:12:43.749 ' 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:43.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.749 --rc genhtml_branch_coverage=1 00:12:43.749 --rc genhtml_function_coverage=1 00:12:43.749 --rc genhtml_legend=1 00:12:43.749 --rc geninfo_all_blocks=1 00:12:43.749 --rc geninfo_unexecuted_blocks=1 00:12:43.749 00:12:43.749 ' 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:43.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:43.749 09:46:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:46.279 09:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:46.279 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:46.279 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.279 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:46.280 Found net devices under 0000:09:00.0: cvl_0_0 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:46.280 Found net devices under 0000:09:00.1: cvl_0_1 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:46.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:12:46.280 00:12:46.280 --- 10.0.0.2 ping statistics --- 00:12:46.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.280 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:46.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:12:46.280 00:12:46.280 --- 10.0.0.1 ping statistics --- 00:12:46.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.280 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3699341 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3699341 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3699341 ']' 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:46.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.280 09:46:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.280 [2024-11-20 09:46:22.894781] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:12:46.280 [2024-11-20 09:46:22.894875] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.280 [2024-11-20 09:46:22.968827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:46.280 [2024-11-20 09:46:23.030089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.280 [2024-11-20 09:46:23.030138] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.280 [2024-11-20 09:46:23.030167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.280 [2024-11-20 09:46:23.030180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.280 [2024-11-20 09:46:23.030190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.280 [2024-11-20 09:46:23.031696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.280 [2024-11-20 09:46:23.031758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.280 [2024-11-20 09:46:23.031761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.280 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.280 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:46.280 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:46.280 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:46.280 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.280 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.280 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:46.280 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.280 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.280 [2024-11-20 09:46:23.181321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.281 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.281 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:46.281 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
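Condensed, the bring-up recorded above is a network-namespace TCP topology on the two E810 ports plus an SPDK target started inside that namespace. The sketch below restates that sequence using the interface names, addresses and RPC arguments shown in the log; the scripts/rpc.py invocation is an assumption standing in for the harness's rpc_cmd wrapper, and backgrounding nvmf_tgt with & is likewise a simplification of the waitforlisten handshake.

  # Namespace-based TCP topology (names and addresses as reported above)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator-side port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:...'           # tagged so teardown can strip exactly this rule
  ping -c 1 10.0.0.2                                 # root namespace -> namespaced target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and the reverse direction
  modprobe nvme-tcp

  # Target and subsystem (rpc.py against /var/tmp/spdk.sock is an assumed stand-in for rpc_cmd)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10

The listener on 10.0.0.2:4420 and the NULL1 null bdev that connect_stress exercises are added by the nvmf_subsystem_add_listener and bdev_null_create RPCs in the entries that follow.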
00:12:46.281 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.538 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.538 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.538 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.538 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.538 [2024-11-20 09:46:23.198624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.538 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.538 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:46.538 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.538 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.538 NULL1 00:12:46.538 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.538 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3699433 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:46.539 09:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.539 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.797 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.797 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:46.797 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.797 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.797 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.054 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.054 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:47.054 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.054 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.054 09:46:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.311 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.311 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:47.311 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.311 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.311 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.876 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.876 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:47.876 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.876 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.876 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:48.133 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.133 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:48.133 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.133 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.133 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:48.390 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.390 09:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:48.390 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.390 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.390 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:48.648 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.648 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:48.648 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.648 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.648 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:49.212 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.212 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:49.212 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:49.212 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.212 09:46:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:49.470 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.470 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:49.470 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:49.470 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.470 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:49.727 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.727 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:49.727 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:49.727 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.727 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:49.983 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.983 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:49.983 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:49.983 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.983 09:46:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:50.240 09:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.240 09:46:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:50.240 09:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:50.240 09:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.240 09:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:50.824 09:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.824 09:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:50.824 09:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:50.824 09:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.824 09:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.087 09:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.087 09:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:51.087 09:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.087 09:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.087 09:46:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.343 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.343 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:51.343 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.343 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.344 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.601 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.601 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:51.601 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.601 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.601 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.858 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.858 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:51.858 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.858 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.858 09:46:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.424 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.424 09:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:52.424 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.424 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.424 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.681 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.681 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:52.681 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.681 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.681 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.938 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.938 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:52.938 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.938 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.938 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.196 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.196 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:53.196 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.196 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.196 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.453 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.453 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:53.453 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.453 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.453 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.016 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.016 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:54.016 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.016 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.016 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.273 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.273 09:46:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:54.273 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.273 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.273 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.534 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.534 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:54.534 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.534 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.534 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.791 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.791 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:54.791 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.791 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.791 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.048 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.048 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:55.048 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.048 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.048 09:46:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.611 09:46:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.611 09:46:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:55.611 09:46:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.611 09:46:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.611 09:46:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.868 09:46:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.868 09:46:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:55.868 09:46:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.868 09:46:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.868 09:46:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.125 09:46:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.125 09:46:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:56.125 09:46:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.125 09:46:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.125 09:46:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.382 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.382 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:56.382 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.382 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.382 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.638 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:56.638 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.638 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3699433 00:12:56.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3699433) - No such process 00:12:56.638 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3699433 00:12:56.638 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:56.638 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:56.638 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:56.638 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:56.638 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:56.638 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:56.638 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:56.638 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:56.639 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:56.639 rmmod nvme_tcp 00:12:56.895 rmmod nvme_fabrics 00:12:56.895 rmmod nvme_keyring 00:12:56.895 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:56.895 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:56.895 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:56.895 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3699341 ']' 00:12:56.895 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3699341 00:12:56.895 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3699341 ']' 00:12:56.895 09:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3699341 00:12:56.895 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:12:56.895 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.895 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3699341 00:12:56.895 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:56.895 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:56.895 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3699341' 00:12:56.895 killing process with pid 3699341 00:12:56.895 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3699341 00:12:56.895 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3699341 00:12:57.154 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:57.154 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:57.154 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:57.154 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:57.154 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:57.154 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:57.154 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:57.154 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:57.154 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:57.154 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.154 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.154 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.058 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:59.058 00:12:59.058 real 0m15.503s 00:12:59.058 user 0m38.625s 00:12:59.058 sys 0m5.971s 00:12:59.058 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.058 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.058 ************************************ 00:12:59.058 END TEST nvmf_connect_stress 00:12:59.058 ************************************ 00:12:59.058 09:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:59.058 09:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:59.058 
09:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.058 09:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:59.058 ************************************ 00:12:59.058 START TEST nvmf_fused_ordering 00:12:59.058 ************************************ 00:12:59.058 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:59.317 * Looking for test storage... 00:12:59.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.317 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:59.317 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:12:59.317 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:59.317 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:59.317 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:59.317 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:59.317 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:59.317 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.317 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:59.317 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:59.317 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:59.317 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:59.317 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:59.317 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:59.317 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:59.317 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:59.317 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:59.317 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:59.317 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:59.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.318 --rc genhtml_branch_coverage=1 00:12:59.318 --rc genhtml_function_coverage=1 00:12:59.318 --rc genhtml_legend=1 00:12:59.318 --rc geninfo_all_blocks=1 00:12:59.318 --rc geninfo_unexecuted_blocks=1 00:12:59.318 00:12:59.318 ' 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:59.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.318 --rc genhtml_branch_coverage=1 00:12:59.318 --rc genhtml_function_coverage=1 00:12:59.318 --rc genhtml_legend=1 00:12:59.318 --rc geninfo_all_blocks=1 00:12:59.318 --rc geninfo_unexecuted_blocks=1 00:12:59.318 00:12:59.318 ' 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:59.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.318 --rc genhtml_branch_coverage=1 00:12:59.318 --rc genhtml_function_coverage=1 00:12:59.318 --rc genhtml_legend=1 00:12:59.318 --rc geninfo_all_blocks=1 00:12:59.318 --rc geninfo_unexecuted_blocks=1 00:12:59.318 00:12:59.318 ' 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:59.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.318 --rc genhtml_branch_coverage=1 00:12:59.318 --rc genhtml_function_coverage=1 00:12:59.318 --rc genhtml_legend=1 00:12:59.318 --rc geninfo_all_blocks=1 00:12:59.318 --rc geninfo_unexecuted_blocks=1 00:12:59.318 00:12:59.318 ' 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:59.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:59.318 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:01.856 09:46:38 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:01.856 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:01.856 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:01.856 Found net devices under 0000:09:00.0: cvl_0_0 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:01.856 Found net devices under 0000:09:00.1: cvl_0_1 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.856 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:01.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:01.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:13:01.857 00:13:01.857 --- 10.0.0.2 ping statistics --- 00:13:01.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.857 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:01.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:01.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:13:01.857 00:13:01.857 --- 10.0.0.1 ping statistics --- 00:13:01.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.857 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3702642 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3702642 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3702642 ']' 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:01.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:01.857 [2024-11-20 09:46:38.429195] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:13:01.857 [2024-11-20 09:46:38.429286] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.857 [2024-11-20 09:46:38.502880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.857 [2024-11-20 09:46:38.561950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.857 [2024-11-20 09:46:38.562004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.857 [2024-11-20 09:46:38.562032] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.857 [2024-11-20 09:46:38.562043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.857 [2024-11-20 09:46:38.562053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:01.857 [2024-11-20 09:46:38.562702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:01.857 [2024-11-20 09:46:38.712162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:01.857 [2024-11-20 09:46:38.728400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:01.857 NULL1 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:01.857 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.858 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:02.114 [2024-11-20 09:46:38.773699] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
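At this point the target side is fully configured and the fused_ordering initiator is starting up. Condensed from the xtrace above, the bring-up amounts to the sequence below. This is a readability sketch, not the harness itself: paths are shown relative to the SPDK checkout, the trailing '&' stands in for the harness's nvmfappstart/waitforlisten handling, and the rpc_cmd calls are assumed to go through scripts/rpc.py against the default /var/tmp/spdk.sock socket that waitforlisten polls above.

# Start the NVMe-oF target inside the test namespace created earlier
# (cvl_0_0_ns_spdk holds cvl_0_0/10.0.0.2; cvl_0_1/10.0.0.1 stays on the host side).
# -m 0x2 pins the target to core 1, -e 0xFFFF enables all tracepoint groups (both echoed in the notices above).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

# Configure the target over JSON-RPC.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512     # null bdev that will back namespace 1
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Run the fused-ordering initiator from the host side against the listener in the namespace.
./test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'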
00:13:02.114 [2024-11-20 09:46:38.773733] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3702666 ] 00:13:02.371 Attached to nqn.2016-06.io.spdk:cnode1 00:13:02.371 Namespace ID: 1 size: 1GB 00:13:02.371 fused_ordering(0) 00:13:02.371 fused_ordering(1) 00:13:02.371 fused_ordering(2) 00:13:02.371 fused_ordering(3) 00:13:02.371 fused_ordering(4) 00:13:02.372 fused_ordering(5) 00:13:02.372 fused_ordering(6) 00:13:02.372 fused_ordering(7) 00:13:02.372 fused_ordering(8) 00:13:02.372 fused_ordering(9) 00:13:02.372 fused_ordering(10) 00:13:02.372 fused_ordering(11) 00:13:02.372 fused_ordering(12) 00:13:02.372 fused_ordering(13) 00:13:02.372 fused_ordering(14) 00:13:02.372 fused_ordering(15) 00:13:02.372 fused_ordering(16) 00:13:02.372 fused_ordering(17) 00:13:02.372 fused_ordering(18) 00:13:02.372 fused_ordering(19) 00:13:02.372 fused_ordering(20) 00:13:02.372 fused_ordering(21) 00:13:02.372 fused_ordering(22) 00:13:02.372 fused_ordering(23) 00:13:02.372 fused_ordering(24) 00:13:02.372 fused_ordering(25) 00:13:02.372 fused_ordering(26) 00:13:02.372 fused_ordering(27) 00:13:02.372 fused_ordering(28) 00:13:02.372 fused_ordering(29) 00:13:02.372 fused_ordering(30) 00:13:02.372 fused_ordering(31) 00:13:02.372 fused_ordering(32) 00:13:02.372 fused_ordering(33) 00:13:02.372 fused_ordering(34) 00:13:02.372 fused_ordering(35) 00:13:02.372 fused_ordering(36) 00:13:02.372 fused_ordering(37) 00:13:02.372 fused_ordering(38) 00:13:02.372 fused_ordering(39) 00:13:02.372 fused_ordering(40) 00:13:02.372 fused_ordering(41) 00:13:02.372 fused_ordering(42) 00:13:02.372 fused_ordering(43) 00:13:02.372 fused_ordering(44) 00:13:02.372 fused_ordering(45) 00:13:02.372 fused_ordering(46) 00:13:02.372 fused_ordering(47) 00:13:02.372 fused_ordering(48) 00:13:02.372 fused_ordering(49) 00:13:02.372 fused_ordering(50) 00:13:02.372 fused_ordering(51) 00:13:02.372 fused_ordering(52) 00:13:02.372 fused_ordering(53) 00:13:02.372 fused_ordering(54) 00:13:02.372 fused_ordering(55) 00:13:02.372 fused_ordering(56) 00:13:02.372 fused_ordering(57) 00:13:02.372 fused_ordering(58) 00:13:02.372 fused_ordering(59) 00:13:02.372 fused_ordering(60) 00:13:02.372 fused_ordering(61) 00:13:02.372 fused_ordering(62) 00:13:02.372 fused_ordering(63) 00:13:02.372 fused_ordering(64) 00:13:02.372 fused_ordering(65) 00:13:02.372 fused_ordering(66) 00:13:02.372 fused_ordering(67) 00:13:02.372 fused_ordering(68) 00:13:02.372 fused_ordering(69) 00:13:02.372 fused_ordering(70) 00:13:02.372 fused_ordering(71) 00:13:02.372 fused_ordering(72) 00:13:02.372 fused_ordering(73) 00:13:02.372 fused_ordering(74) 00:13:02.372 fused_ordering(75) 00:13:02.372 fused_ordering(76) 00:13:02.372 fused_ordering(77) 00:13:02.372 fused_ordering(78) 00:13:02.372 fused_ordering(79) 00:13:02.372 fused_ordering(80) 00:13:02.372 fused_ordering(81) 00:13:02.372 fused_ordering(82) 00:13:02.372 fused_ordering(83) 00:13:02.372 fused_ordering(84) 00:13:02.372 fused_ordering(85) 00:13:02.372 fused_ordering(86) 00:13:02.372 fused_ordering(87) 00:13:02.372 fused_ordering(88) 00:13:02.372 fused_ordering(89) 00:13:02.372 fused_ordering(90) 00:13:02.372 fused_ordering(91) 00:13:02.372 fused_ordering(92) 00:13:02.372 fused_ordering(93) 00:13:02.372 fused_ordering(94) 00:13:02.372 fused_ordering(95) 00:13:02.372 fused_ordering(96) 00:13:02.372 fused_ordering(97) 00:13:02.372 fused_ordering(98) 
00:13:02.372 fused_ordering(99) 00:13:02.372 fused_ordering(100) 00:13:02.372 fused_ordering(101) 00:13:02.372 fused_ordering(102) 00:13:02.372 fused_ordering(103) 00:13:02.372 fused_ordering(104) 00:13:02.372 fused_ordering(105) 00:13:02.372 fused_ordering(106) 00:13:02.372 fused_ordering(107) 00:13:02.372 fused_ordering(108) 00:13:02.372 fused_ordering(109) 00:13:02.372 fused_ordering(110) 00:13:02.372 fused_ordering(111) 00:13:02.372 fused_ordering(112) 00:13:02.372 fused_ordering(113) 00:13:02.372 fused_ordering(114) 00:13:02.372 fused_ordering(115) 00:13:02.372 fused_ordering(116) 00:13:02.372 fused_ordering(117) 00:13:02.372 fused_ordering(118) 00:13:02.372 fused_ordering(119) 00:13:02.372 fused_ordering(120) 00:13:02.372 fused_ordering(121) 00:13:02.372 fused_ordering(122) 00:13:02.372 fused_ordering(123) 00:13:02.372 fused_ordering(124) 00:13:02.372 fused_ordering(125) 00:13:02.372 fused_ordering(126) 00:13:02.372 fused_ordering(127) 00:13:02.372 fused_ordering(128) 00:13:02.372 fused_ordering(129) 00:13:02.372 fused_ordering(130) 00:13:02.372 fused_ordering(131) 00:13:02.372 fused_ordering(132) 00:13:02.372 fused_ordering(133) 00:13:02.372 fused_ordering(134) 00:13:02.372 fused_ordering(135) 00:13:02.372 fused_ordering(136) 00:13:02.372 fused_ordering(137) 00:13:02.372 fused_ordering(138) 00:13:02.372 fused_ordering(139) 00:13:02.372 fused_ordering(140) 00:13:02.372 fused_ordering(141) 00:13:02.372 fused_ordering(142) 00:13:02.372 fused_ordering(143) 00:13:02.372 fused_ordering(144) 00:13:02.372 fused_ordering(145) 00:13:02.372 fused_ordering(146) 00:13:02.372 fused_ordering(147) 00:13:02.372 fused_ordering(148) 00:13:02.372 fused_ordering(149) 00:13:02.372 fused_ordering(150) 00:13:02.372 fused_ordering(151) 00:13:02.372 fused_ordering(152) 00:13:02.372 fused_ordering(153) 00:13:02.372 fused_ordering(154) 00:13:02.372 fused_ordering(155) 00:13:02.372 fused_ordering(156) 00:13:02.372 fused_ordering(157) 00:13:02.372 fused_ordering(158) 00:13:02.372 fused_ordering(159) 00:13:02.372 fused_ordering(160) 00:13:02.372 fused_ordering(161) 00:13:02.372 fused_ordering(162) 00:13:02.372 fused_ordering(163) 00:13:02.372 fused_ordering(164) 00:13:02.372 fused_ordering(165) 00:13:02.372 fused_ordering(166) 00:13:02.372 fused_ordering(167) 00:13:02.372 fused_ordering(168) 00:13:02.372 fused_ordering(169) 00:13:02.372 fused_ordering(170) 00:13:02.372 fused_ordering(171) 00:13:02.372 fused_ordering(172) 00:13:02.372 fused_ordering(173) 00:13:02.372 fused_ordering(174) 00:13:02.372 fused_ordering(175) 00:13:02.372 fused_ordering(176) 00:13:02.372 fused_ordering(177) 00:13:02.372 fused_ordering(178) 00:13:02.372 fused_ordering(179) 00:13:02.372 fused_ordering(180) 00:13:02.372 fused_ordering(181) 00:13:02.372 fused_ordering(182) 00:13:02.372 fused_ordering(183) 00:13:02.372 fused_ordering(184) 00:13:02.372 fused_ordering(185) 00:13:02.372 fused_ordering(186) 00:13:02.372 fused_ordering(187) 00:13:02.372 fused_ordering(188) 00:13:02.372 fused_ordering(189) 00:13:02.372 fused_ordering(190) 00:13:02.372 fused_ordering(191) 00:13:02.372 fused_ordering(192) 00:13:02.372 fused_ordering(193) 00:13:02.372 fused_ordering(194) 00:13:02.373 fused_ordering(195) 00:13:02.373 fused_ordering(196) 00:13:02.373 fused_ordering(197) 00:13:02.373 fused_ordering(198) 00:13:02.373 fused_ordering(199) 00:13:02.373 fused_ordering(200) 00:13:02.373 fused_ordering(201) 00:13:02.373 fused_ordering(202) 00:13:02.373 fused_ordering(203) 00:13:02.373 fused_ordering(204) 00:13:02.373 fused_ordering(205) 00:13:02.630 
fused_ordering(206) 00:13:02.630 fused_ordering(207) 00:13:02.630 fused_ordering(208) 00:13:02.630 fused_ordering(209) 00:13:02.630 fused_ordering(210) 00:13:02.630 fused_ordering(211) 00:13:02.630 fused_ordering(212) 00:13:02.630 fused_ordering(213) 00:13:02.630 fused_ordering(214) 00:13:02.630 fused_ordering(215) 00:13:02.630 fused_ordering(216) 00:13:02.630 fused_ordering(217) 00:13:02.630 fused_ordering(218) 00:13:02.630 fused_ordering(219) 00:13:02.630 fused_ordering(220) 00:13:02.630 fused_ordering(221) 00:13:02.630 fused_ordering(222) 00:13:02.630 fused_ordering(223) 00:13:02.630 fused_ordering(224) 00:13:02.630 fused_ordering(225) 00:13:02.630 fused_ordering(226) 00:13:02.630 fused_ordering(227) 00:13:02.630 fused_ordering(228) 00:13:02.630 fused_ordering(229) 00:13:02.630 fused_ordering(230) 00:13:02.630 fused_ordering(231) 00:13:02.630 fused_ordering(232) 00:13:02.630 fused_ordering(233) 00:13:02.630 fused_ordering(234) 00:13:02.630 fused_ordering(235) 00:13:02.630 fused_ordering(236) 00:13:02.630 fused_ordering(237) 00:13:02.630 fused_ordering(238) 00:13:02.630 fused_ordering(239) 00:13:02.630 fused_ordering(240) 00:13:02.630 fused_ordering(241) 00:13:02.630 fused_ordering(242) 00:13:02.630 fused_ordering(243) 00:13:02.630 fused_ordering(244) 00:13:02.630 fused_ordering(245) 00:13:02.630 fused_ordering(246) 00:13:02.630 fused_ordering(247) 00:13:02.630 fused_ordering(248) 00:13:02.630 fused_ordering(249) 00:13:02.630 fused_ordering(250) 00:13:02.630 fused_ordering(251) 00:13:02.630 fused_ordering(252) 00:13:02.630 fused_ordering(253) 00:13:02.630 fused_ordering(254) 00:13:02.630 fused_ordering(255) 00:13:02.630 fused_ordering(256) 00:13:02.630 fused_ordering(257) 00:13:02.630 fused_ordering(258) 00:13:02.630 fused_ordering(259) 00:13:02.630 fused_ordering(260) 00:13:02.630 fused_ordering(261) 00:13:02.630 fused_ordering(262) 00:13:02.630 fused_ordering(263) 00:13:02.630 fused_ordering(264) 00:13:02.630 fused_ordering(265) 00:13:02.630 fused_ordering(266) 00:13:02.630 fused_ordering(267) 00:13:02.630 fused_ordering(268) 00:13:02.630 fused_ordering(269) 00:13:02.630 fused_ordering(270) 00:13:02.630 fused_ordering(271) 00:13:02.630 fused_ordering(272) 00:13:02.630 fused_ordering(273) 00:13:02.630 fused_ordering(274) 00:13:02.630 fused_ordering(275) 00:13:02.630 fused_ordering(276) 00:13:02.630 fused_ordering(277) 00:13:02.630 fused_ordering(278) 00:13:02.630 fused_ordering(279) 00:13:02.630 fused_ordering(280) 00:13:02.630 fused_ordering(281) 00:13:02.630 fused_ordering(282) 00:13:02.630 fused_ordering(283) 00:13:02.630 fused_ordering(284) 00:13:02.630 fused_ordering(285) 00:13:02.630 fused_ordering(286) 00:13:02.630 fused_ordering(287) 00:13:02.630 fused_ordering(288) 00:13:02.630 fused_ordering(289) 00:13:02.630 fused_ordering(290) 00:13:02.630 fused_ordering(291) 00:13:02.630 fused_ordering(292) 00:13:02.630 fused_ordering(293) 00:13:02.630 fused_ordering(294) 00:13:02.630 fused_ordering(295) 00:13:02.630 fused_ordering(296) 00:13:02.630 fused_ordering(297) 00:13:02.630 fused_ordering(298) 00:13:02.630 fused_ordering(299) 00:13:02.630 fused_ordering(300) 00:13:02.630 fused_ordering(301) 00:13:02.630 fused_ordering(302) 00:13:02.630 fused_ordering(303) 00:13:02.630 fused_ordering(304) 00:13:02.630 fused_ordering(305) 00:13:02.630 fused_ordering(306) 00:13:02.630 fused_ordering(307) 00:13:02.630 fused_ordering(308) 00:13:02.630 fused_ordering(309) 00:13:02.630 fused_ordering(310) 00:13:02.630 fused_ordering(311) 00:13:02.630 fused_ordering(312) 00:13:02.630 fused_ordering(313) 
00:13:02.630 fused_ordering(314) 00:13:02.630 fused_ordering(315) 00:13:02.630 fused_ordering(316) 00:13:02.630 fused_ordering(317) 00:13:02.630 fused_ordering(318) 00:13:02.630 fused_ordering(319) 00:13:02.630 fused_ordering(320) 00:13:02.630 fused_ordering(321) 00:13:02.630 fused_ordering(322) 00:13:02.630 fused_ordering(323) 00:13:02.630 fused_ordering(324) 00:13:02.630 fused_ordering(325) 00:13:02.630 fused_ordering(326) 00:13:02.630 fused_ordering(327) 00:13:02.630 fused_ordering(328) 00:13:02.630 fused_ordering(329) 00:13:02.630 fused_ordering(330) 00:13:02.630 fused_ordering(331) 00:13:02.630 fused_ordering(332) 00:13:02.630 fused_ordering(333) 00:13:02.630 fused_ordering(334) 00:13:02.630 fused_ordering(335) 00:13:02.630 fused_ordering(336) 00:13:02.630 fused_ordering(337) 00:13:02.630 fused_ordering(338) 00:13:02.630 fused_ordering(339) 00:13:02.630 fused_ordering(340) 00:13:02.630 fused_ordering(341) 00:13:02.630 fused_ordering(342) 00:13:02.630 fused_ordering(343) 00:13:02.630 fused_ordering(344) 00:13:02.630 fused_ordering(345) 00:13:02.630 fused_ordering(346) 00:13:02.630 fused_ordering(347) 00:13:02.630 fused_ordering(348) 00:13:02.630 fused_ordering(349) 00:13:02.630 fused_ordering(350) 00:13:02.630 fused_ordering(351) 00:13:02.630 fused_ordering(352) 00:13:02.630 fused_ordering(353) 00:13:02.630 fused_ordering(354) 00:13:02.630 fused_ordering(355) 00:13:02.630 fused_ordering(356) 00:13:02.630 fused_ordering(357) 00:13:02.630 fused_ordering(358) 00:13:02.630 fused_ordering(359) 00:13:02.630 fused_ordering(360) 00:13:02.630 fused_ordering(361) 00:13:02.630 fused_ordering(362) 00:13:02.630 fused_ordering(363) 00:13:02.630 fused_ordering(364) 00:13:02.630 fused_ordering(365) 00:13:02.630 fused_ordering(366) 00:13:02.630 fused_ordering(367) 00:13:02.630 fused_ordering(368) 00:13:02.630 fused_ordering(369) 00:13:02.630 fused_ordering(370) 00:13:02.630 fused_ordering(371) 00:13:02.630 fused_ordering(372) 00:13:02.630 fused_ordering(373) 00:13:02.630 fused_ordering(374) 00:13:02.630 fused_ordering(375) 00:13:02.630 fused_ordering(376) 00:13:02.630 fused_ordering(377) 00:13:02.630 fused_ordering(378) 00:13:02.630 fused_ordering(379) 00:13:02.630 fused_ordering(380) 00:13:02.630 fused_ordering(381) 00:13:02.630 fused_ordering(382) 00:13:02.630 fused_ordering(383) 00:13:02.630 fused_ordering(384) 00:13:02.630 fused_ordering(385) 00:13:02.630 fused_ordering(386) 00:13:02.630 fused_ordering(387) 00:13:02.630 fused_ordering(388) 00:13:02.630 fused_ordering(389) 00:13:02.630 fused_ordering(390) 00:13:02.630 fused_ordering(391) 00:13:02.630 fused_ordering(392) 00:13:02.630 fused_ordering(393) 00:13:02.630 fused_ordering(394) 00:13:02.630 fused_ordering(395) 00:13:02.630 fused_ordering(396) 00:13:02.630 fused_ordering(397) 00:13:02.630 fused_ordering(398) 00:13:02.630 fused_ordering(399) 00:13:02.630 fused_ordering(400) 00:13:02.630 fused_ordering(401) 00:13:02.630 fused_ordering(402) 00:13:02.630 fused_ordering(403) 00:13:02.630 fused_ordering(404) 00:13:02.630 fused_ordering(405) 00:13:02.630 fused_ordering(406) 00:13:02.630 fused_ordering(407) 00:13:02.630 fused_ordering(408) 00:13:02.630 fused_ordering(409) 00:13:02.630 fused_ordering(410) 00:13:03.195 fused_ordering(411) 00:13:03.195 fused_ordering(412) 00:13:03.195 fused_ordering(413) 00:13:03.195 fused_ordering(414) 00:13:03.195 fused_ordering(415) 00:13:03.195 fused_ordering(416) 00:13:03.195 fused_ordering(417) 00:13:03.195 fused_ordering(418) 00:13:03.195 fused_ordering(419) 00:13:03.195 fused_ordering(420) 00:13:03.195 
fused_ordering(421) 00:13:03.195 fused_ordering(422) 00:13:03.195 fused_ordering(423) 00:13:03.195 fused_ordering(424) 00:13:03.195 fused_ordering(425) 00:13:03.195 fused_ordering(426) 00:13:03.195 fused_ordering(427) 00:13:03.195 fused_ordering(428) 00:13:03.195 fused_ordering(429) 00:13:03.195 fused_ordering(430) 00:13:03.195 fused_ordering(431) 00:13:03.195 fused_ordering(432) 00:13:03.195 fused_ordering(433) 00:13:03.195 fused_ordering(434) 00:13:03.195 fused_ordering(435) 00:13:03.195 fused_ordering(436) 00:13:03.195 fused_ordering(437) 00:13:03.195 fused_ordering(438) 00:13:03.195 fused_ordering(439) 00:13:03.195 fused_ordering(440) 00:13:03.195 fused_ordering(441) 00:13:03.195 fused_ordering(442) 00:13:03.195 fused_ordering(443) 00:13:03.195 fused_ordering(444) 00:13:03.195 fused_ordering(445) 00:13:03.195 fused_ordering(446) 00:13:03.195 fused_ordering(447) 00:13:03.195 fused_ordering(448) 00:13:03.195 fused_ordering(449) 00:13:03.195 fused_ordering(450) 00:13:03.195 fused_ordering(451) 00:13:03.195 fused_ordering(452) 00:13:03.195 fused_ordering(453) 00:13:03.195 fused_ordering(454) 00:13:03.195 fused_ordering(455) 00:13:03.195 fused_ordering(456) 00:13:03.195 fused_ordering(457) 00:13:03.195 fused_ordering(458) 00:13:03.195 fused_ordering(459) 00:13:03.195 fused_ordering(460) 00:13:03.195 fused_ordering(461) 00:13:03.195 fused_ordering(462) 00:13:03.195 fused_ordering(463) 00:13:03.195 fused_ordering(464) 00:13:03.195 fused_ordering(465) 00:13:03.195 fused_ordering(466) 00:13:03.195 fused_ordering(467) 00:13:03.195 fused_ordering(468) 00:13:03.195 fused_ordering(469) 00:13:03.195 fused_ordering(470) 00:13:03.195 fused_ordering(471) 00:13:03.195 fused_ordering(472) 00:13:03.195 fused_ordering(473) 00:13:03.195 fused_ordering(474) 00:13:03.195 fused_ordering(475) 00:13:03.195 fused_ordering(476) 00:13:03.195 fused_ordering(477) 00:13:03.195 fused_ordering(478) 00:13:03.195 fused_ordering(479) 00:13:03.195 fused_ordering(480) 00:13:03.195 fused_ordering(481) 00:13:03.195 fused_ordering(482) 00:13:03.195 fused_ordering(483) 00:13:03.195 fused_ordering(484) 00:13:03.195 fused_ordering(485) 00:13:03.195 fused_ordering(486) 00:13:03.195 fused_ordering(487) 00:13:03.195 fused_ordering(488) 00:13:03.195 fused_ordering(489) 00:13:03.195 fused_ordering(490) 00:13:03.195 fused_ordering(491) 00:13:03.195 fused_ordering(492) 00:13:03.195 fused_ordering(493) 00:13:03.195 fused_ordering(494) 00:13:03.195 fused_ordering(495) 00:13:03.195 fused_ordering(496) 00:13:03.195 fused_ordering(497) 00:13:03.195 fused_ordering(498) 00:13:03.195 fused_ordering(499) 00:13:03.195 fused_ordering(500) 00:13:03.195 fused_ordering(501) 00:13:03.195 fused_ordering(502) 00:13:03.195 fused_ordering(503) 00:13:03.195 fused_ordering(504) 00:13:03.195 fused_ordering(505) 00:13:03.195 fused_ordering(506) 00:13:03.195 fused_ordering(507) 00:13:03.195 fused_ordering(508) 00:13:03.195 fused_ordering(509) 00:13:03.195 fused_ordering(510) 00:13:03.195 fused_ordering(511) 00:13:03.195 fused_ordering(512) 00:13:03.195 fused_ordering(513) 00:13:03.195 fused_ordering(514) 00:13:03.195 fused_ordering(515) 00:13:03.195 fused_ordering(516) 00:13:03.195 fused_ordering(517) 00:13:03.195 fused_ordering(518) 00:13:03.195 fused_ordering(519) 00:13:03.195 fused_ordering(520) 00:13:03.195 fused_ordering(521) 00:13:03.195 fused_ordering(522) 00:13:03.195 fused_ordering(523) 00:13:03.195 fused_ordering(524) 00:13:03.195 fused_ordering(525) 00:13:03.195 fused_ordering(526) 00:13:03.195 fused_ordering(527) 00:13:03.195 fused_ordering(528) 
00:13:03.195 fused_ordering(529) 00:13:03.195 fused_ordering(530) 00:13:03.195 fused_ordering(531) 00:13:03.195 fused_ordering(532) 00:13:03.195 fused_ordering(533) 00:13:03.195 fused_ordering(534) 00:13:03.195 fused_ordering(535) 00:13:03.195 fused_ordering(536) 00:13:03.195 fused_ordering(537) 00:13:03.195 fused_ordering(538) 00:13:03.195 fused_ordering(539) 00:13:03.195 fused_ordering(540) 00:13:03.195 fused_ordering(541) 00:13:03.195 fused_ordering(542) 00:13:03.195 fused_ordering(543) 00:13:03.195 fused_ordering(544) 00:13:03.195 fused_ordering(545) 00:13:03.195 fused_ordering(546) 00:13:03.195 fused_ordering(547) 00:13:03.195 fused_ordering(548) 00:13:03.195 fused_ordering(549) 00:13:03.195 fused_ordering(550) 00:13:03.195 fused_ordering(551) 00:13:03.195 fused_ordering(552) 00:13:03.195 fused_ordering(553) 00:13:03.195 fused_ordering(554) 00:13:03.195 fused_ordering(555) 00:13:03.195 fused_ordering(556) 00:13:03.195 fused_ordering(557) 00:13:03.195 fused_ordering(558) 00:13:03.195 fused_ordering(559) 00:13:03.195 fused_ordering(560) 00:13:03.195 fused_ordering(561) 00:13:03.195 fused_ordering(562) 00:13:03.195 fused_ordering(563) 00:13:03.195 fused_ordering(564) 00:13:03.195 fused_ordering(565) 00:13:03.195 fused_ordering(566) 00:13:03.195 fused_ordering(567) 00:13:03.195 fused_ordering(568) 00:13:03.195 fused_ordering(569) 00:13:03.195 fused_ordering(570) 00:13:03.195 fused_ordering(571) 00:13:03.195 fused_ordering(572) 00:13:03.195 fused_ordering(573) 00:13:03.195 fused_ordering(574) 00:13:03.195 fused_ordering(575) 00:13:03.195 fused_ordering(576) 00:13:03.195 fused_ordering(577) 00:13:03.195 fused_ordering(578) 00:13:03.195 fused_ordering(579) 00:13:03.195 fused_ordering(580) 00:13:03.195 fused_ordering(581) 00:13:03.195 fused_ordering(582) 00:13:03.195 fused_ordering(583) 00:13:03.195 fused_ordering(584) 00:13:03.195 fused_ordering(585) 00:13:03.195 fused_ordering(586) 00:13:03.195 fused_ordering(587) 00:13:03.195 fused_ordering(588) 00:13:03.195 fused_ordering(589) 00:13:03.195 fused_ordering(590) 00:13:03.195 fused_ordering(591) 00:13:03.195 fused_ordering(592) 00:13:03.195 fused_ordering(593) 00:13:03.195 fused_ordering(594) 00:13:03.195 fused_ordering(595) 00:13:03.195 fused_ordering(596) 00:13:03.195 fused_ordering(597) 00:13:03.195 fused_ordering(598) 00:13:03.195 fused_ordering(599) 00:13:03.195 fused_ordering(600) 00:13:03.195 fused_ordering(601) 00:13:03.195 fused_ordering(602) 00:13:03.195 fused_ordering(603) 00:13:03.195 fused_ordering(604) 00:13:03.195 fused_ordering(605) 00:13:03.195 fused_ordering(606) 00:13:03.195 fused_ordering(607) 00:13:03.195 fused_ordering(608) 00:13:03.195 fused_ordering(609) 00:13:03.195 fused_ordering(610) 00:13:03.195 fused_ordering(611) 00:13:03.195 fused_ordering(612) 00:13:03.195 fused_ordering(613) 00:13:03.195 fused_ordering(614) 00:13:03.195 fused_ordering(615) 00:13:03.760 fused_ordering(616) 00:13:03.760 fused_ordering(617) 00:13:03.760 fused_ordering(618) 00:13:03.760 fused_ordering(619) 00:13:03.760 fused_ordering(620) 00:13:03.760 fused_ordering(621) 00:13:03.760 fused_ordering(622) 00:13:03.760 fused_ordering(623) 00:13:03.760 fused_ordering(624) 00:13:03.760 fused_ordering(625) 00:13:03.760 fused_ordering(626) 00:13:03.760 fused_ordering(627) 00:13:03.760 fused_ordering(628) 00:13:03.760 fused_ordering(629) 00:13:03.760 fused_ordering(630) 00:13:03.760 fused_ordering(631) 00:13:03.760 fused_ordering(632) 00:13:03.760 fused_ordering(633) 00:13:03.760 fused_ordering(634) 00:13:03.760 fused_ordering(635) 00:13:03.760 
fused_ordering(636) 00:13:03.760 fused_ordering(637) 00:13:03.760 fused_ordering(638) 00:13:03.760 fused_ordering(639) 00:13:03.760 fused_ordering(640) 00:13:03.760 fused_ordering(641) 00:13:03.760 fused_ordering(642) 00:13:03.760 fused_ordering(643) 00:13:03.760 fused_ordering(644) 00:13:03.760 fused_ordering(645) 00:13:03.760 fused_ordering(646) 00:13:03.760 fused_ordering(647) 00:13:03.760 fused_ordering(648) 00:13:03.760 fused_ordering(649) 00:13:03.760 fused_ordering(650) 00:13:03.760 fused_ordering(651) 00:13:03.760 fused_ordering(652) 00:13:03.760 fused_ordering(653) 00:13:03.760 fused_ordering(654) 00:13:03.760 fused_ordering(655) 00:13:03.760 fused_ordering(656) 00:13:03.760 fused_ordering(657) 00:13:03.760 fused_ordering(658) 00:13:03.760 fused_ordering(659) 00:13:03.760 fused_ordering(660) 00:13:03.760 fused_ordering(661) 00:13:03.760 fused_ordering(662) 00:13:03.760 fused_ordering(663) 00:13:03.760 fused_ordering(664) 00:13:03.760 fused_ordering(665) 00:13:03.760 fused_ordering(666) 00:13:03.760 fused_ordering(667) 00:13:03.760 fused_ordering(668) 00:13:03.760 fused_ordering(669) 00:13:03.760 fused_ordering(670) 00:13:03.760 fused_ordering(671) 00:13:03.760 fused_ordering(672) 00:13:03.760 fused_ordering(673) 00:13:03.760 fused_ordering(674) 00:13:03.760 fused_ordering(675) 00:13:03.760 fused_ordering(676) 00:13:03.760 fused_ordering(677) 00:13:03.760 fused_ordering(678) 00:13:03.760 fused_ordering(679) 00:13:03.760 fused_ordering(680) 00:13:03.760 fused_ordering(681) 00:13:03.760 fused_ordering(682) 00:13:03.760 fused_ordering(683) 00:13:03.760 fused_ordering(684) 00:13:03.760 fused_ordering(685) 00:13:03.760 fused_ordering(686) 00:13:03.760 fused_ordering(687) 00:13:03.760 fused_ordering(688) 00:13:03.760 fused_ordering(689) 00:13:03.760 fused_ordering(690) 00:13:03.760 fused_ordering(691) 00:13:03.760 fused_ordering(692) 00:13:03.760 fused_ordering(693) 00:13:03.760 fused_ordering(694) 00:13:03.760 fused_ordering(695) 00:13:03.760 fused_ordering(696) 00:13:03.760 fused_ordering(697) 00:13:03.760 fused_ordering(698) 00:13:03.760 fused_ordering(699) 00:13:03.760 fused_ordering(700) 00:13:03.760 fused_ordering(701) 00:13:03.760 fused_ordering(702) 00:13:03.760 fused_ordering(703) 00:13:03.760 fused_ordering(704) 00:13:03.760 fused_ordering(705) 00:13:03.760 fused_ordering(706) 00:13:03.760 fused_ordering(707) 00:13:03.760 fused_ordering(708) 00:13:03.760 fused_ordering(709) 00:13:03.760 fused_ordering(710) 00:13:03.760 fused_ordering(711) 00:13:03.760 fused_ordering(712) 00:13:03.760 fused_ordering(713) 00:13:03.760 fused_ordering(714) 00:13:03.760 fused_ordering(715) 00:13:03.760 fused_ordering(716) 00:13:03.760 fused_ordering(717) 00:13:03.760 fused_ordering(718) 00:13:03.760 fused_ordering(719) 00:13:03.760 fused_ordering(720) 00:13:03.760 fused_ordering(721) 00:13:03.760 fused_ordering(722) 00:13:03.760 fused_ordering(723) 00:13:03.760 fused_ordering(724) 00:13:03.760 fused_ordering(725) 00:13:03.760 fused_ordering(726) 00:13:03.760 fused_ordering(727) 00:13:03.760 fused_ordering(728) 00:13:03.760 fused_ordering(729) 00:13:03.760 fused_ordering(730) 00:13:03.760 fused_ordering(731) 00:13:03.760 fused_ordering(732) 00:13:03.760 fused_ordering(733) 00:13:03.761 fused_ordering(734) 00:13:03.761 fused_ordering(735) 00:13:03.761 fused_ordering(736) 00:13:03.761 fused_ordering(737) 00:13:03.761 fused_ordering(738) 00:13:03.761 fused_ordering(739) 00:13:03.761 fused_ordering(740) 00:13:03.761 fused_ordering(741) 00:13:03.761 fused_ordering(742) 00:13:03.761 fused_ordering(743) 
00:13:03.761 fused_ordering(744) 00:13:03.761 fused_ordering(745) 00:13:03.761 fused_ordering(746) 00:13:03.761 fused_ordering(747) 00:13:03.761 fused_ordering(748) 00:13:03.761 fused_ordering(749) 00:13:03.761 fused_ordering(750) 00:13:03.761 fused_ordering(751) 00:13:03.761 fused_ordering(752) 00:13:03.761 fused_ordering(753) 00:13:03.761 fused_ordering(754) 00:13:03.761 fused_ordering(755) 00:13:03.761 fused_ordering(756) 00:13:03.761 fused_ordering(757) 00:13:03.761 fused_ordering(758) 00:13:03.761 fused_ordering(759) 00:13:03.761 fused_ordering(760) 00:13:03.761 fused_ordering(761) 00:13:03.761 fused_ordering(762) 00:13:03.761 fused_ordering(763) 00:13:03.761 fused_ordering(764) 00:13:03.761 fused_ordering(765) 00:13:03.761 fused_ordering(766) 00:13:03.761 fused_ordering(767) 00:13:03.761 fused_ordering(768) 00:13:03.761 fused_ordering(769) 00:13:03.761 fused_ordering(770) 00:13:03.761 fused_ordering(771) 00:13:03.761 fused_ordering(772) 00:13:03.761 fused_ordering(773) 00:13:03.761 fused_ordering(774) 00:13:03.761 fused_ordering(775) 00:13:03.761 fused_ordering(776) 00:13:03.761 fused_ordering(777) 00:13:03.761 fused_ordering(778) 00:13:03.761 fused_ordering(779) 00:13:03.761 fused_ordering(780) 00:13:03.761 fused_ordering(781) 00:13:03.761 fused_ordering(782) 00:13:03.761 fused_ordering(783) 00:13:03.761 fused_ordering(784) 00:13:03.761 fused_ordering(785) 00:13:03.761 fused_ordering(786) 00:13:03.761 fused_ordering(787) 00:13:03.761 fused_ordering(788) 00:13:03.761 fused_ordering(789) 00:13:03.761 fused_ordering(790) 00:13:03.761 fused_ordering(791) 00:13:03.761 fused_ordering(792) 00:13:03.761 fused_ordering(793) 00:13:03.761 fused_ordering(794) 00:13:03.761 fused_ordering(795) 00:13:03.761 fused_ordering(796) 00:13:03.761 fused_ordering(797) 00:13:03.761 fused_ordering(798) 00:13:03.761 fused_ordering(799) 00:13:03.761 fused_ordering(800) 00:13:03.761 fused_ordering(801) 00:13:03.761 fused_ordering(802) 00:13:03.761 fused_ordering(803) 00:13:03.761 fused_ordering(804) 00:13:03.761 fused_ordering(805) 00:13:03.761 fused_ordering(806) 00:13:03.761 fused_ordering(807) 00:13:03.761 fused_ordering(808) 00:13:03.761 fused_ordering(809) 00:13:03.761 fused_ordering(810) 00:13:03.761 fused_ordering(811) 00:13:03.761 fused_ordering(812) 00:13:03.761 fused_ordering(813) 00:13:03.761 fused_ordering(814) 00:13:03.761 fused_ordering(815) 00:13:03.761 fused_ordering(816) 00:13:03.761 fused_ordering(817) 00:13:03.761 fused_ordering(818) 00:13:03.761 fused_ordering(819) 00:13:03.761 fused_ordering(820) 00:13:04.325 fused_ordering(821) 00:13:04.325 fused_ordering(822) 00:13:04.325 fused_ordering(823) 00:13:04.325 fused_ordering(824) 00:13:04.325 fused_ordering(825) 00:13:04.325 fused_ordering(826) 00:13:04.325 fused_ordering(827) 00:13:04.325 fused_ordering(828) 00:13:04.325 fused_ordering(829) 00:13:04.325 fused_ordering(830) 00:13:04.325 fused_ordering(831) 00:13:04.325 fused_ordering(832) 00:13:04.325 fused_ordering(833) 00:13:04.325 fused_ordering(834) 00:13:04.325 fused_ordering(835) 00:13:04.325 fused_ordering(836) 00:13:04.325 fused_ordering(837) 00:13:04.325 fused_ordering(838) 00:13:04.325 fused_ordering(839) 00:13:04.325 fused_ordering(840) 00:13:04.325 fused_ordering(841) 00:13:04.325 fused_ordering(842) 00:13:04.325 fused_ordering(843) 00:13:04.325 fused_ordering(844) 00:13:04.325 fused_ordering(845) 00:13:04.325 fused_ordering(846) 00:13:04.325 fused_ordering(847) 00:13:04.325 fused_ordering(848) 00:13:04.325 fused_ordering(849) 00:13:04.325 fused_ordering(850) 00:13:04.325 
fused_ordering(851) 00:13:04.325 fused_ordering(852) 00:13:04.325 fused_ordering(853) 00:13:04.325 fused_ordering(854) 00:13:04.325 fused_ordering(855) 00:13:04.325 fused_ordering(856) 00:13:04.325 fused_ordering(857) 00:13:04.325 fused_ordering(858) 00:13:04.325 fused_ordering(859) 00:13:04.325 fused_ordering(860) 00:13:04.325 fused_ordering(861) 00:13:04.325 fused_ordering(862) 00:13:04.325 fused_ordering(863) 00:13:04.325 fused_ordering(864) 00:13:04.325 fused_ordering(865) 00:13:04.325 fused_ordering(866) 00:13:04.325 fused_ordering(867) 00:13:04.325 fused_ordering(868) 00:13:04.325 fused_ordering(869) 00:13:04.325 fused_ordering(870) 00:13:04.325 fused_ordering(871) 00:13:04.325 fused_ordering(872) 00:13:04.325 fused_ordering(873) 00:13:04.325 fused_ordering(874) 00:13:04.325 fused_ordering(875) 00:13:04.325 fused_ordering(876) 00:13:04.325 fused_ordering(877) 00:13:04.325 fused_ordering(878) 00:13:04.325 fused_ordering(879) 00:13:04.325 fused_ordering(880) 00:13:04.325 fused_ordering(881) 00:13:04.325 fused_ordering(882) 00:13:04.325 fused_ordering(883) 00:13:04.325 fused_ordering(884) 00:13:04.325 fused_ordering(885) 00:13:04.325 fused_ordering(886) 00:13:04.325 fused_ordering(887) 00:13:04.325 fused_ordering(888) 00:13:04.325 fused_ordering(889) 00:13:04.325 fused_ordering(890) 00:13:04.325 fused_ordering(891) 00:13:04.325 fused_ordering(892) 00:13:04.326 fused_ordering(893) 00:13:04.326 fused_ordering(894) 00:13:04.326 fused_ordering(895) 00:13:04.326 fused_ordering(896) 00:13:04.326 fused_ordering(897) 00:13:04.326 fused_ordering(898) 00:13:04.326 fused_ordering(899) 00:13:04.326 fused_ordering(900) 00:13:04.326 fused_ordering(901) 00:13:04.326 fused_ordering(902) 00:13:04.326 fused_ordering(903) 00:13:04.326 fused_ordering(904) 00:13:04.326 fused_ordering(905) 00:13:04.326 fused_ordering(906) 00:13:04.326 fused_ordering(907) 00:13:04.326 fused_ordering(908) 00:13:04.326 fused_ordering(909) 00:13:04.326 fused_ordering(910) 00:13:04.326 fused_ordering(911) 00:13:04.326 fused_ordering(912) 00:13:04.326 fused_ordering(913) 00:13:04.326 fused_ordering(914) 00:13:04.326 fused_ordering(915) 00:13:04.326 fused_ordering(916) 00:13:04.326 fused_ordering(917) 00:13:04.326 fused_ordering(918) 00:13:04.326 fused_ordering(919) 00:13:04.326 fused_ordering(920) 00:13:04.326 fused_ordering(921) 00:13:04.326 fused_ordering(922) 00:13:04.326 fused_ordering(923) 00:13:04.326 fused_ordering(924) 00:13:04.326 fused_ordering(925) 00:13:04.326 fused_ordering(926) 00:13:04.326 fused_ordering(927) 00:13:04.326 fused_ordering(928) 00:13:04.326 fused_ordering(929) 00:13:04.326 fused_ordering(930) 00:13:04.326 fused_ordering(931) 00:13:04.326 fused_ordering(932) 00:13:04.326 fused_ordering(933) 00:13:04.326 fused_ordering(934) 00:13:04.326 fused_ordering(935) 00:13:04.326 fused_ordering(936) 00:13:04.326 fused_ordering(937) 00:13:04.326 fused_ordering(938) 00:13:04.326 fused_ordering(939) 00:13:04.326 fused_ordering(940) 00:13:04.326 fused_ordering(941) 00:13:04.326 fused_ordering(942) 00:13:04.326 fused_ordering(943) 00:13:04.326 fused_ordering(944) 00:13:04.326 fused_ordering(945) 00:13:04.326 fused_ordering(946) 00:13:04.326 fused_ordering(947) 00:13:04.326 fused_ordering(948) 00:13:04.326 fused_ordering(949) 00:13:04.326 fused_ordering(950) 00:13:04.326 fused_ordering(951) 00:13:04.326 fused_ordering(952) 00:13:04.326 fused_ordering(953) 00:13:04.326 fused_ordering(954) 00:13:04.326 fused_ordering(955) 00:13:04.326 fused_ordering(956) 00:13:04.326 fused_ordering(957) 00:13:04.326 fused_ordering(958) 
00:13:04.326 fused_ordering(959) 00:13:04.326 fused_ordering(960) 00:13:04.326 fused_ordering(961) 00:13:04.326 fused_ordering(962) 00:13:04.326 fused_ordering(963) 00:13:04.326 fused_ordering(964) 00:13:04.326 fused_ordering(965) 00:13:04.326 fused_ordering(966) 00:13:04.326 fused_ordering(967) 00:13:04.326 fused_ordering(968) 00:13:04.326 fused_ordering(969) 00:13:04.326 fused_ordering(970) 00:13:04.326 fused_ordering(971) 00:13:04.326 fused_ordering(972) 00:13:04.326 fused_ordering(973) 00:13:04.326 fused_ordering(974) 00:13:04.326 fused_ordering(975) 00:13:04.326 fused_ordering(976) 00:13:04.326 fused_ordering(977) 00:13:04.326 fused_ordering(978) 00:13:04.326 fused_ordering(979) 00:13:04.326 fused_ordering(980) 00:13:04.326 fused_ordering(981) 00:13:04.326 fused_ordering(982) 00:13:04.326 fused_ordering(983) 00:13:04.326 fused_ordering(984) 00:13:04.326 fused_ordering(985) 00:13:04.326 fused_ordering(986) 00:13:04.326 fused_ordering(987) 00:13:04.326 fused_ordering(988) 00:13:04.326 fused_ordering(989) 00:13:04.326 fused_ordering(990) 00:13:04.326 fused_ordering(991) 00:13:04.326 fused_ordering(992) 00:13:04.326 fused_ordering(993) 00:13:04.326 fused_ordering(994) 00:13:04.326 fused_ordering(995) 00:13:04.326 fused_ordering(996) 00:13:04.326 fused_ordering(997) 00:13:04.326 fused_ordering(998) 00:13:04.326 fused_ordering(999) 00:13:04.326 fused_ordering(1000) 00:13:04.326 fused_ordering(1001) 00:13:04.326 fused_ordering(1002) 00:13:04.326 fused_ordering(1003) 00:13:04.326 fused_ordering(1004) 00:13:04.326 fused_ordering(1005) 00:13:04.326 fused_ordering(1006) 00:13:04.326 fused_ordering(1007) 00:13:04.326 fused_ordering(1008) 00:13:04.326 fused_ordering(1009) 00:13:04.326 fused_ordering(1010) 00:13:04.326 fused_ordering(1011) 00:13:04.326 fused_ordering(1012) 00:13:04.326 fused_ordering(1013) 00:13:04.326 fused_ordering(1014) 00:13:04.326 fused_ordering(1015) 00:13:04.326 fused_ordering(1016) 00:13:04.326 fused_ordering(1017) 00:13:04.326 fused_ordering(1018) 00:13:04.326 fused_ordering(1019) 00:13:04.326 fused_ordering(1020) 00:13:04.326 fused_ordering(1021) 00:13:04.326 fused_ordering(1022) 00:13:04.326 fused_ordering(1023) 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:04.326 rmmod nvme_tcp 00:13:04.326 rmmod nvme_fabrics 00:13:04.326 rmmod nvme_keyring 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:04.326 09:46:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3702642 ']' 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3702642 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3702642 ']' 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3702642 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3702642 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3702642' 00:13:04.326 killing process with pid 3702642 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3702642 00:13:04.326 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3702642 00:13:04.584 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:04.584 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:04.584 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:04.584 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:04.584 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:04.584 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:04.584 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:04.584 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:04.584 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:04.584 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.584 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.584 09:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.116 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:07.116 00:13:07.116 real 0m7.450s 00:13:07.116 user 0m4.999s 00:13:07.116 sys 0m3.096s 00:13:07.116 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.116 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:07.116 ************************************ 00:13:07.116 END TEST nvmf_fused_ordering 00:13:07.116 
************************************ 00:13:07.116 09:46:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:07.116 09:46:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:07.116 09:46:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.116 09:46:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:07.116 ************************************ 00:13:07.116 START TEST nvmf_ns_masking 00:13:07.116 ************************************ 00:13:07.116 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:07.116 * Looking for test storage... 00:13:07.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:07.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.117 --rc genhtml_branch_coverage=1 00:13:07.117 --rc genhtml_function_coverage=1 00:13:07.117 --rc genhtml_legend=1 00:13:07.117 --rc geninfo_all_blocks=1 00:13:07.117 --rc geninfo_unexecuted_blocks=1 00:13:07.117 00:13:07.117 ' 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:07.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.117 --rc genhtml_branch_coverage=1 00:13:07.117 --rc genhtml_function_coverage=1 00:13:07.117 --rc genhtml_legend=1 00:13:07.117 --rc geninfo_all_blocks=1 00:13:07.117 --rc geninfo_unexecuted_blocks=1 00:13:07.117 00:13:07.117 ' 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:07.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.117 --rc genhtml_branch_coverage=1 00:13:07.117 --rc genhtml_function_coverage=1 00:13:07.117 --rc genhtml_legend=1 00:13:07.117 --rc geninfo_all_blocks=1 00:13:07.117 --rc geninfo_unexecuted_blocks=1 00:13:07.117 00:13:07.117 ' 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:07.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.117 --rc genhtml_branch_coverage=1 00:13:07.117 --rc genhtml_function_coverage=1 00:13:07.117 --rc genhtml_legend=1 00:13:07.117 --rc geninfo_all_blocks=1 00:13:07.117 --rc geninfo_unexecuted_blocks=1 00:13:07.117 00:13:07.117 ' 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.117 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3c170bfd-d97a-4340-9811-4285929416e0 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=d5d0abde-ff8d-47e0-b1f2-d3dca441919c 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=2d9e71b5-59a7-465d-a2ae-d9a2d8395201 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:07.118 09:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:09.018 09:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:09.018 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:09.019 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:09.019 09:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:09.019 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:09.019 Found net devices under 0000:09:00.0: cvl_0_0 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
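The device-discovery loop traced above matches both functions of an Intel E810 NIC (vendor:device 0x8086:0x159b, bound to the ice driver) and records the netdev names found under each function's sysfs node (cvl_0_0 and cvl_0_1 on this machine). A minimal sketch of the same lookup done by hand, assuming standard lspci and the usual sysfs layout; the PCI addresses are the ones reported in this run:

  # List the E810 functions by vendor:device ID (0x8086:0x159b), as matched above.
  lspci -D -d 8086:159b
  # For each function, the bound driver and the attached netdev live under sysfs.
  for pci in 0000:09:00.0 0000:09:00.1; do
      echo "$pci driver: $(basename "$(readlink "/sys/bus/pci/devices/$pci/driver")")"
      ls "/sys/bus/pci/devices/$pci/net/"     # prints cvl_0_0 / cvl_0_1 on this host
  done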
00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:09.019 Found net devices under 0000:09:00.1: cvl_0_1 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.019 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.277 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.277 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.277 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:09.277 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.277 09:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:09.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:13:09.277 00:13:09.277 --- 10.0.0.2 ping statistics --- 00:13:09.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.277 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:09.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:13:09.277 00:13:09.277 --- 10.0.0.1 ping statistics --- 00:13:09.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.277 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3704982 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3704982 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3704982 ']' 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.277 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:09.277 [2024-11-20 09:46:46.101968] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:13:09.277 [2024-11-20 09:46:46.102053] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.277 [2024-11-20 09:46:46.178739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.535 [2024-11-20 09:46:46.239513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.535 [2024-11-20 09:46:46.239568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.535 [2024-11-20 09:46:46.239597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.535 [2024-11-20 09:46:46.239608] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.535 [2024-11-20 09:46:46.239618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
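Before the masking checks begin, the trace above has built a two-endpoint NVMe/TCP topology on a single host: cvl_0_0 is moved into the network namespace cvl_0_0_ns_spdk and addressed 10.0.0.2/24 (target side), cvl_0_1 stays in the default namespace as 10.0.0.1/24 (initiator side), port 4420 is opened in iptables, reachability is confirmed with ping in both directions, and nvmf_tgt is launched inside the namespace and waited on. A condensed sketch of that bring-up, using the interface names, addresses and flags from this run and assuming it is executed from the SPDK checkout:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side (default namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF                     # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
  modprobe nvme-tcp                                      # host-side NVMe/TCP driver
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &              # SPDK target inside the namespace

The trace then waits for the target's RPC socket (/var/tmp/spdk.sock) to come up before issuing any rpc.py calls.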
00:13:09.535 [2024-11-20 09:46:46.240262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.535 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.535 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:09.535 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:09.535 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:09.535 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:09.535 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.535 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:09.793 [2024-11-20 09:46:46.653734] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.793 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:09.793 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:09.793 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:10.052 Malloc1 00:13:10.309 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:10.567 Malloc2 00:13:10.567 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:10.824 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:11.082 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.339 [2024-11-20 09:46:48.057150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.339 09:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:11.339 09:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2d9e71b5-59a7-465d-a2ae-d9a2d8395201 -a 10.0.0.2 -s 4420 -i 4 00:13:11.597 09:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.597 09:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:11.597 09:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.597 09:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:11.597 
09:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:13.493 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:13.493 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:13.493 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:13.493 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:13.493 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.493 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:13.493 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:13.493 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:13.493 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:13.493 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:13.493 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:13.493 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:13.493 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:13.493 [ 0]:0x1 00:13:13.493 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:13.493 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:13.751 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=06184343e8d241c3b5210028bedec601 00:13:13.751 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 06184343e8d241c3b5210028bedec601 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:13.751 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:14.009 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:14.009 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:14.009 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:14.009 [ 0]:0x1 00:13:14.009 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:14.009 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:14.009 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=06184343e8d241c3b5210028bedec601 00:13:14.009 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 06184343e8d241c3b5210028bedec601 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:14.009 09:46:50 
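The steps traced in this stretch of the run are the baseline setup for ns_masking.sh: a TCP transport, two 64 MiB malloc bdevs with 512-byte blocks, a subsystem cnode1 exposing Malloc1 as namespace 1 (auto-visible), a listener on 10.0.0.2:4420, a connection from nqn.2016-06.io.spdk:host1, and a check that namespace 1 is visible because its NGUID is non-zero. A condensed sketch of the underlying rpc.py and nvme-cli calls, run from the SPDK checkout with the NQNs, serial and addresses taken from this run (the host UUID passed with -I in the trace is omitted for brevity):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc bdev_malloc_create 64 512 -b Malloc2
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1    # auto-visible
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: connect as host1 and confirm namespace 1 is exposed.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 -i 4
  nvme list-ns /dev/nvme0                                 # expects nsid 0x1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid     # non-zero NGUID => visible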
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:14.009 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:14.009 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:14.009 [ 1]:0x2 00:13:14.009 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:14.009 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:14.009 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df5db4c7307b48f3a0e15f4889a60f1a 00:13:14.009 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df5db4c7307b48f3a0e15f4889a60f1a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:14.009 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:14.009 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.266 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.524 09:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:14.782 09:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:14.782 09:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2d9e71b5-59a7-465d-a2ae-d9a2d8395201 -a 10.0.0.2 -s 4420 -i 4 00:13:15.040 09:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:15.040 09:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:15.040 09:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.040 09:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:15.040 09:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:15.040 09:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:16.936 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:17.194 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:17.194 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:17.194 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:17.194 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:17.194 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:17.194 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:17.194 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:17.194 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:17.194 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:17.194 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:17.194 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:17.194 [ 0]:0x2 00:13:17.194 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:17.194 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:17.194 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=df5db4c7307b48f3a0e15f4889a60f1a 00:13:17.194 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df5db4c7307b48f3a0e15f4889a60f1a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:17.194 09:46:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:17.452 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:17.452 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:17.452 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:17.452 [ 0]:0x1 00:13:17.452 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:17.452 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:17.452 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=06184343e8d241c3b5210028bedec601 00:13:17.452 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 06184343e8d241c3b5210028bedec601 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:17.452 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:17.452 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:17.452 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:17.452 [ 1]:0x2 00:13:17.452 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:17.452 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:17.452 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df5db4c7307b48f3a0e15f4889a60f1a 00:13:17.452 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df5db4c7307b48f3a0e15f4889a60f1a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:17.452 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:18.019 09:46:54 
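This is the core of the masking test: namespace 1 is detached and re-attached with --no-auto-visible, after which the connected host reads its NGUID as all zeros (the ns_is_visible helper treats an all-zero NGUID as "masked"); granting access explicitly with nvmf_ns_add_host makes it reappear, while namespace 2, which was added auto-visible, stays visible throughout. Reduced to the calls the trace exercises (the disconnect/reconnect the trace performs around the re-attach is elided here):

  nqn=nqn.2016-06.io.spdk:cnode1
  host=nqn.2016-06.io.spdk:host1

  ./scripts/rpc.py nvmf_subsystem_remove_ns $nqn 1
  ./scripts/rpc.py nvmf_subsystem_add_ns $nqn Malloc1 -n 1 --no-auto-visible
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # all zeros: hidden from host1

  ./scripts/rpc.py nvmf_ns_add_host $nqn 1 $host         # allow host1 to see nsid 1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # real NGUID again: visible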
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:18.019 [ 0]:0x2 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df5db4c7307b48f3a0e15f4889a60f1a 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df5db4c7307b48f3a0e15f4889a60f1a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.019 09:46:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:18.310 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:18.310 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2d9e71b5-59a7-465d-a2ae-d9a2d8395201 -a 10.0.0.2 -s 4420 -i 4 00:13:18.310 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:18.310 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:18.310 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:18.310 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:18.310 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:18.310 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:20.865 [ 0]:0x1 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=06184343e8d241c3b5210028bedec601 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 06184343e8d241c3b5210028bedec601 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:20.865 [ 1]:0x2 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df5db4c7307b48f3a0e15f4889a60f1a 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df5db4c7307b48f3a0e15f4889a60f1a != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:20.865 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:21.123 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:21.123 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:21.123 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:21.123 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:21.123 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:21.123 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:21.123 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:21.123 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:21.123 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:21.123 [ 0]:0x2 00:13:21.123 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:21.123 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:21.123 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df5db4c7307b48f3a0e15f4889a60f1a 00:13:21.124 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df5db4c7307b48f3a0e15f4889a60f1a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:21.124 09:46:57 
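The complementary path is also checked: nvmf_ns_remove_host revokes host1's access to namespace 1 again, after which only namespace 2 (still auto-visible) reports a real NGUID from the initiator's side. The equivalent calls, under the same names as above:

  ./scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # back to all zeros: masked again
  nvme id-ns /dev/nvme0 -n 0x2 -o json | jq -r .nguid    # namespace 2 unaffected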
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:21.124 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:21.124 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:21.124 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:21.124 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.124 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:21.124 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.124 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:21.124 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.124 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:21.124 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:21.124 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:21.382 [2024-11-20 09:46:58.090943] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:21.382 request: 00:13:21.382 { 00:13:21.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:21.382 "nsid": 2, 00:13:21.382 "host": "nqn.2016-06.io.spdk:host1", 00:13:21.382 "method": "nvmf_ns_remove_host", 00:13:21.382 "req_id": 1 00:13:21.382 } 00:13:21.382 Got JSON-RPC error response 00:13:21.382 response: 00:13:21.382 { 00:13:21.382 "code": -32602, 00:13:21.382 "message": "Invalid parameters" 00:13:21.382 } 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:21.382 09:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:21.382 [ 0]:0x2 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=df5db4c7307b48f3a0e15f4889a60f1a 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ df5db4c7307b48f3a0e15f4889a60f1a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3706499 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3706499 /var/tmp/host.sock 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3706499 ']' 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:21.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:21.382 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:21.640 [2024-11-20 09:46:58.316460] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:13:21.640 [2024-11-20 09:46:58.316543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3706499 ] 00:13:21.640 [2024-11-20 09:46:58.382955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.640 [2024-11-20 09:46:58.440773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.898 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:21.898 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:21.898 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.155 09:46:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:22.414 09:46:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3c170bfd-d97a-4340-9811-4285929416e0 00:13:22.414 09:46:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:22.414 09:46:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3C170BFDD97A434098114285929416E0 -i 00:13:22.672 09:46:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid d5d0abde-ff8d-47e0-b1f2-d3dca441919c 00:13:22.672 09:46:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:22.672 09:46:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D5D0ABDEFF8D47E0B1F2D3DCA441919C -i 00:13:22.928 09:46:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:23.185 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:23.442 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:23.443 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:24.007 nvme0n1 00:13:24.007 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:24.007 09:47:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:24.265 nvme1n2 00:13:24.522 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:24.522 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:24.523 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:24.523 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:24.523 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:24.780 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:24.780 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:24.780 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:24.780 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:25.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3c170bfd-d97a-4340-9811-4285929416e0 == \3\c\1\7\0\b\f\d\-\d\9\7\a\-\4\3\4\0\-\9\8\1\1\-\4\2\8\5\9\2\9\4\1\6\e\0 ]] 00:13:25.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:25.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:25.037 09:47:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:25.294 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
d5d0abde-ff8d-47e0-b1f2-d3dca441919c == \d\5\d\0\a\b\d\e\-\f\f\8\d\-\4\7\e\0\-\b\1\f\2\-\d\3\d\c\a\4\4\1\9\1\9\c ]] 00:13:25.294 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.551 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:25.809 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 3c170bfd-d97a-4340-9811-4285929416e0 00:13:25.809 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:25.809 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3C170BFDD97A434098114285929416E0 00:13:25.809 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:25.809 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3C170BFDD97A434098114285929416E0 00:13:25.809 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.809 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.809 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.809 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.809 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.809 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.809 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.809 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:25.809 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3C170BFDD97A434098114285929416E0 00:13:26.067 [2024-11-20 09:47:02.788743] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:26.067 [2024-11-20 09:47:02.788788] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:26.067 [2024-11-20 09:47:02.788818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.067 request: 00:13:26.067 { 00:13:26.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:26.067 "namespace": { 00:13:26.067 "bdev_name": 
"invalid", 00:13:26.067 "nsid": 1, 00:13:26.067 "nguid": "3C170BFDD97A434098114285929416E0", 00:13:26.067 "no_auto_visible": false 00:13:26.067 }, 00:13:26.067 "method": "nvmf_subsystem_add_ns", 00:13:26.067 "req_id": 1 00:13:26.067 } 00:13:26.067 Got JSON-RPC error response 00:13:26.067 response: 00:13:26.067 { 00:13:26.067 "code": -32602, 00:13:26.067 "message": "Invalid parameters" 00:13:26.067 } 00:13:26.067 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:26.067 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:26.067 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:26.067 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:26.067 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 3c170bfd-d97a-4340-9811-4285929416e0 00:13:26.067 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:26.067 09:47:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3C170BFDD97A434098114285929416E0 -i 00:13:26.325 09:47:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:28.223 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:28.223 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:28.224 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:28.481 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:28.481 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3706499 00:13:28.481 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3706499 ']' 00:13:28.481 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3706499 00:13:28.481 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:28.481 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.481 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3706499 00:13:28.738 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:28.738 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:28.738 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3706499' 00:13:28.738 killing process with pid 3706499 00:13:28.738 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3706499 00:13:28.738 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3706499 00:13:28.995 09:47:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:29.252 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:29.252 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:29.252 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:29.252 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:29.252 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:29.252 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:29.252 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:29.252 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:29.252 rmmod nvme_tcp 00:13:29.252 rmmod nvme_fabrics 00:13:29.252 rmmod nvme_keyring 00:13:29.252 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:29.252 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:29.252 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:29.252 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3704982 ']' 00:13:29.252 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3704982 00:13:29.252 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3704982 ']' 00:13:29.252 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3704982 00:13:29.252 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:29.253 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.253 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3704982 00:13:29.510 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:29.510 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:29.510 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3704982' 00:13:29.510 killing process with pid 3704982 00:13:29.510 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3704982 00:13:29.510 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3704982 00:13:29.768 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:29.768 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:29.768 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:29.768 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:29.768 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:29.768 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
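The subsystem being deleted here closes out a test that drove per-host namespace visibility through a handful of JSON-RPC calls. A condensed sketch of that flow, using only invocations that appear verbatim in the trace (rpc.py stands for the full scripts/rpc.py path shown above; the NQNs, NGUIDs and the -i flag are copied from this run, and uuid2nguid simply strips the dashes from a UUID as traced):

  # Re-create the two namespaces with explicit NGUIDs, as in ns_masking.sh@122-127.
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3C170BFDD97A434098114285929416E0 -i
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D5D0ABDEFF8D47E0B1F2D3DCA441919C -i
  # Grant each host NQN its own namespace ...
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
  # ... and revoke access again; the namespace then disappears from that host's view.
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # The same call against namespace 2 is rejected in this run, producing the
  # "Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2" error captured above.
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1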
00:13:29.768 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:29.768 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:29.768 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:29.768 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.768 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.768 09:47:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.674 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:31.674 00:13:31.674 real 0m25.039s 00:13:31.674 user 0m35.974s 00:13:31.674 sys 0m4.788s 00:13:31.674 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.674 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:31.674 ************************************ 00:13:31.674 END TEST nvmf_ns_masking 00:13:31.674 ************************************ 00:13:31.674 09:47:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:31.674 09:47:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:31.674 09:47:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:31.674 09:47:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.674 09:47:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:31.674 ************************************ 00:13:31.674 START TEST nvmf_nvme_cli 00:13:31.674 ************************************ 00:13:31.675 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:31.935 * Looking for test storage... 
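The visibility assertions in that ns_masking run all reduce to the same two probes: list the namespaces the connected controller exposes, then read the NGUID of the NSID under test and compare it with the all-zero value the trace shows for a masked namespace. A minimal sketch of the helper, reconstructed from the traced ns_masking.sh@43-45 commands (the body is an approximation; only the nvme/jq calls and the zero-NGUID comparison come from the trace, and $ctrl_id is the controller name resolved earlier with nvme list-subsys):

  # Succeeds only when NSID $1 is visible to this host through /dev/$ctrl_id.
  ns_is_visible() {
      nvme list-ns "/dev/$ctrl_id" | grep "$1"
      # The trace shows a masked namespace reporting an all-zero NGUID here.
      local nguid
      nguid=$(nvme id-ns "/dev/$ctrl_id" -n "$1" -o json | jq -r .nguid)
      [[ "$nguid" != "00000000000000000000000000000000" ]]
  }

That is exactly what flips in the run above: after nvmf_ns_remove_host for host1, NSID 1 reports the all-zero NGUID and the NOT wrapper expects the helper to fail, while NSID 2 keeps reporting df5db4c7307b48f3a0e15f4889a60f1a.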
00:13:31.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:31.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.935 --rc genhtml_branch_coverage=1 00:13:31.935 --rc genhtml_function_coverage=1 00:13:31.935 --rc genhtml_legend=1 00:13:31.935 --rc geninfo_all_blocks=1 00:13:31.935 --rc geninfo_unexecuted_blocks=1 00:13:31.935 00:13:31.935 ' 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:31.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.935 --rc genhtml_branch_coverage=1 00:13:31.935 --rc genhtml_function_coverage=1 00:13:31.935 --rc genhtml_legend=1 00:13:31.935 --rc geninfo_all_blocks=1 00:13:31.935 --rc geninfo_unexecuted_blocks=1 00:13:31.935 00:13:31.935 ' 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:31.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.935 --rc genhtml_branch_coverage=1 00:13:31.935 --rc genhtml_function_coverage=1 00:13:31.935 --rc genhtml_legend=1 00:13:31.935 --rc geninfo_all_blocks=1 00:13:31.935 --rc geninfo_unexecuted_blocks=1 00:13:31.935 00:13:31.935 ' 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:31.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.935 --rc genhtml_branch_coverage=1 00:13:31.935 --rc genhtml_function_coverage=1 00:13:31.935 --rc genhtml_legend=1 00:13:31.935 --rc geninfo_all_blocks=1 00:13:31.935 --rc geninfo_unexecuted_blocks=1 00:13:31.935 00:13:31.935 ' 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
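The lcov probe at the top of this nvme_cli setup runs through the lt/cmp_versions helpers in scripts/common.sh, which split dotted version strings on '.' and '-' and compare them field by field (hence lt 1.15 2 succeeding in the trace). A self-contained sketch of that idiom under a hypothetical name, version_lt, which is not the real helper and only approximates the traced comparison:

  # Returns success when dotted version $1 sorts before $2, comparing numeric
  # fields left to right and treating missing fields as 0.
  version_lt() {
      local -a v1 v2
      IFS=.- read -ra v1 <<< "$1"
      IFS=.- read -ra v2 <<< "$2"
      local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < max; i++ )); do
          if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
          if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
      done
      return 1
  }

  version_lt 1.15 2 && echo "1.15 < 2"   # matches the lt 1.15 2 result traced above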
00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.935 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:31.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:31.936 09:47:08 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:31.936 09:47:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:34.470 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:34.470 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.470 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.471 
09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:34.471 Found net devices under 0000:09:00.0: cvl_0_0 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:34.471 Found net devices under 0000:09:00.1: cvl_0_1 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:34.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:34.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:13:34.471 00:13:34.471 --- 10.0.0.2 ping statistics --- 00:13:34.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.471 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:13:34.471 09:47:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:34.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:34.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:13:34.471 00:13:34.471 --- 10.0.0.1 ping statistics --- 00:13:34.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.471 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3709458 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3709458 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3709458 ']' 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:34.471 [2024-11-20 09:47:11.078324] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
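The nvmf_tcp_init sequence that ran just above is what turns one of the two e810 ports into the target side of the test bed while the other stays in the root namespace as the initiator. Condensed from the ip/iptables commands in the trace (interface names and addresses are the ones from this run; the nvmf_tgt path is shortened here and the iptables comment argument is dropped):

  # Move the target port into its own network namespace; the initiator port stays put.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Let NVMe/TCP traffic reach port 4420, then sanity-check reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # The target itself is then launched inside that namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF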
00:13:34.471 [2024-11-20 09:47:11.078416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.471 [2024-11-20 09:47:11.150667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:34.471 [2024-11-20 09:47:11.209523] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.471 [2024-11-20 09:47:11.209572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.471 [2024-11-20 09:47:11.209604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.471 [2024-11-20 09:47:11.209614] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.471 [2024-11-20 09:47:11.209624] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:34.471 [2024-11-20 09:47:11.211272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.471 [2024-11-20 09:47:11.211369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:34.471 [2024-11-20 09:47:11.211344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.471 [2024-11-20 09:47:11.211373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:34.471 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:34.472 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:34.472 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:34.472 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.472 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:34.472 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.472 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:34.472 [2024-11-20 09:47:11.360440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.472 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.472 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:34.472 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.472 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:34.730 Malloc0 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
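Once the target answers on /var/tmp/spdk.sock, nvme_cli.sh builds the objects the rest of the test talks to, using the rpc_cmd calls traced here and immediately below. The same sequence, condensed (rpc_cmd is the harness's wrapper around the target's JSON-RPC interface, as in the trace):

  # TCP transport, two malloc bdevs, one subsystem carrying both namespaces,
  # plus data and discovery listeners on 10.0.0.2:4420.
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd bdev_malloc_create 64 512 -b Malloc1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

After that, the traced nvme discover reports both the discovery subsystem and cnode1, and nvme connect exposes two block devices whose serial, SPDKISFASTANDAWESOME, is what the waitforserial loop greps for.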
00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:34.730 Malloc1 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:34.730 [2024-11-20 09:47:11.466916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.730 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:13:34.988 00:13:34.988 Discovery Log Number of Records 2, Generation counter 2 00:13:34.988 =====Discovery Log Entry 0====== 00:13:34.988 trtype: tcp 00:13:34.988 adrfam: ipv4 00:13:34.988 subtype: current discovery subsystem 00:13:34.988 treq: not required 00:13:34.988 portid: 0 00:13:34.988 trsvcid: 4420 00:13:34.988 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:13:34.988 traddr: 10.0.0.2 00:13:34.988 eflags: explicit discovery connections, duplicate discovery information 00:13:34.988 sectype: none 00:13:34.988 =====Discovery Log Entry 1====== 00:13:34.988 trtype: tcp 00:13:34.988 adrfam: ipv4 00:13:34.988 subtype: nvme subsystem 00:13:34.988 treq: not required 00:13:34.988 portid: 0 00:13:34.988 trsvcid: 4420 00:13:34.988 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:34.988 traddr: 10.0.0.2 00:13:34.988 eflags: none 00:13:34.988 sectype: none 00:13:34.988 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:34.988 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:34.988 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:34.988 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:34.988 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:34.988 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:34.988 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:34.988 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:34.988 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:34.988 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:34.988 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:35.554 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:35.554 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:35.554 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.554 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:35.554 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:35.554 09:47:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:38.081 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:38.081 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:38.081 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:38.081 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:38.081 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:38.081 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:38.081 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:38.081 09:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:38.081 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.081 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:38.081 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:38.081 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.081 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:38.082 /dev/nvme0n2 ]] 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:38.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.082 09:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:38.082 rmmod nvme_tcp 00:13:38.082 rmmod nvme_fabrics 00:13:38.082 rmmod nvme_keyring 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3709458 ']' 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3709458 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3709458 ']' 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3709458 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3709458 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3709458' 00:13:38.082 killing process with pid 3709458 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3709458 00:13:38.082 09:47:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3709458 00:13:38.339 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:38.339 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:38.339 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:38.339 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:38.339 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:38.339 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:38.339 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:38.339 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:38.339 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:38.339 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.339 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:38.339 09:47:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.249 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:40.249 00:13:40.249 real 0m8.586s 00:13:40.249 user 0m16.152s 00:13:40.249 sys 0m2.317s 00:13:40.249 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.249 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:40.249 ************************************ 00:13:40.249 END TEST nvmf_nvme_cli 00:13:40.249 ************************************ 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:40.508 ************************************ 00:13:40.508 START TEST nvmf_vfio_user 00:13:40.508 ************************************ 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:13:40.508 * Looking for test storage... 00:13:40.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:40.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.508 --rc genhtml_branch_coverage=1 00:13:40.508 --rc genhtml_function_coverage=1 00:13:40.508 --rc genhtml_legend=1 00:13:40.508 --rc geninfo_all_blocks=1 00:13:40.508 --rc geninfo_unexecuted_blocks=1 00:13:40.508 00:13:40.508 ' 00:13:40.508 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:40.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.508 --rc genhtml_branch_coverage=1 00:13:40.508 --rc genhtml_function_coverage=1 00:13:40.508 --rc genhtml_legend=1 00:13:40.509 --rc geninfo_all_blocks=1 00:13:40.509 --rc geninfo_unexecuted_blocks=1 00:13:40.509 00:13:40.509 ' 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:40.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.509 --rc genhtml_branch_coverage=1 00:13:40.509 --rc genhtml_function_coverage=1 00:13:40.509 --rc genhtml_legend=1 00:13:40.509 --rc geninfo_all_blocks=1 00:13:40.509 --rc geninfo_unexecuted_blocks=1 00:13:40.509 00:13:40.509 ' 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:40.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.509 --rc genhtml_branch_coverage=1 00:13:40.509 --rc genhtml_function_coverage=1 00:13:40.509 --rc genhtml_legend=1 00:13:40.509 --rc geninfo_all_blocks=1 00:13:40.509 --rc geninfo_unexecuted_blocks=1 00:13:40.509 00:13:40.509 ' 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:40.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
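For reference, the vfio-user test below follows the same target-setup pattern but listens on a filesystem socket directory instead of an IP:port; a condensed sketch of what the following log lines drive, with commands taken from the log, rpc.py standing in for the full scripts/rpc.py path, and only device 1 shown (device 2 repeats the same steps with Malloc2 and cnode2):

# target side: VFIOUSER transport, socket directory per controller, malloc-backed subsystem
rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
# host side: identify the controller over the vfio-user socket
spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci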
00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3710347 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3710347' 00:13:40.509 Process pid: 3710347 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3710347 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3710347 ']' 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.509 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:40.766 [2024-11-20 09:47:17.432177] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:13:40.766 [2024-11-20 09:47:17.432264] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.766 [2024-11-20 09:47:17.500435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.766 [2024-11-20 09:47:17.560770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.766 [2024-11-20 09:47:17.560837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:40.766 [2024-11-20 09:47:17.560850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.766 [2024-11-20 09:47:17.560861] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.766 [2024-11-20 09:47:17.560870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.766 [2024-11-20 09:47:17.562501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.766 [2024-11-20 09:47:17.562562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.766 [2024-11-20 09:47:17.562626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.766 [2024-11-20 09:47:17.562630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.023 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.023 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:41.023 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:41.956 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:42.214 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:42.214 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:42.214 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:42.214 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:42.214 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:42.473 Malloc1 00:13:42.473 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:43.038 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:43.038 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:43.296 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:43.296 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:43.553 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:43.811 Malloc2 00:13:43.811 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:13:44.069 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:44.326 09:47:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:44.586 09:47:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:44.586 09:47:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:44.586 09:47:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:44.586 09:47:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:44.586 09:47:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:44.586 09:47:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:44.586 [2024-11-20 09:47:21.351223] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:13:44.586 [2024-11-20 09:47:21.351266] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710890 ] 00:13:44.586 [2024-11-20 09:47:21.399088] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:44.586 [2024-11-20 09:47:21.411810] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:44.586 [2024-11-20 09:47:21.411844] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fcb6ad0c000 00:13:44.586 [2024-11-20 09:47:21.412802] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.586 [2024-11-20 09:47:21.413795] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.586 [2024-11-20 09:47:21.414803] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.586 [2024-11-20 09:47:21.415809] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:44.586 [2024-11-20 09:47:21.416813] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:44.586 [2024-11-20 09:47:21.417820] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.586 [2024-11-20 09:47:21.418822] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:13:44.586 [2024-11-20 09:47:21.419829] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.586 [2024-11-20 09:47:21.420845] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:44.586 [2024-11-20 09:47:21.420866] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fcb6ad01000 00:13:44.586 [2024-11-20 09:47:21.422024] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:44.586 [2024-11-20 09:47:21.437703] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:44.586 [2024-11-20 09:47:21.437751] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:44.586 [2024-11-20 09:47:21.439937] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:44.586 [2024-11-20 09:47:21.439996] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:44.586 [2024-11-20 09:47:21.440090] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:44.586 [2024-11-20 09:47:21.440139] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:44.586 [2024-11-20 09:47:21.440150] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:44.586 [2024-11-20 09:47:21.440937] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:44.586 [2024-11-20 09:47:21.440959] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:44.586 [2024-11-20 09:47:21.440972] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:44.586 [2024-11-20 09:47:21.441943] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:44.586 [2024-11-20 09:47:21.441965] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:44.586 [2024-11-20 09:47:21.441980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:44.586 [2024-11-20 09:47:21.442944] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:44.586 [2024-11-20 09:47:21.442963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:44.586 [2024-11-20 09:47:21.443951] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:13:44.586 [2024-11-20 09:47:21.443971] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:44.586 [2024-11-20 09:47:21.443980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:44.586 [2024-11-20 09:47:21.443992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:44.586 [2024-11-20 09:47:21.444116] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:44.586 [2024-11-20 09:47:21.444126] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:44.586 [2024-11-20 09:47:21.444135] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:44.586 [2024-11-20 09:47:21.444965] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:44.586 [2024-11-20 09:47:21.445966] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:44.586 [2024-11-20 09:47:21.446968] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:44.586 [2024-11-20 09:47:21.447963] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:44.586 [2024-11-20 09:47:21.448074] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:44.586 [2024-11-20 09:47:21.448985] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:44.586 [2024-11-20 09:47:21.449004] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:44.586 [2024-11-20 09:47:21.449012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:44.586 [2024-11-20 09:47:21.449036] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:44.586 [2024-11-20 09:47:21.449055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:44.586 [2024-11-20 09:47:21.449088] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:44.586 [2024-11-20 09:47:21.449098] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:44.586 [2024-11-20 09:47:21.449105] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.586 [2024-11-20 09:47:21.449127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:13:44.586 [2024-11-20 09:47:21.449180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:44.586 [2024-11-20 09:47:21.449200] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:44.586 [2024-11-20 09:47:21.449209] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:44.586 [2024-11-20 09:47:21.449216] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:44.586 [2024-11-20 09:47:21.449223] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:44.586 [2024-11-20 09:47:21.449251] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:44.586 [2024-11-20 09:47:21.449260] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:44.586 [2024-11-20 09:47:21.449268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:44.586 [2024-11-20 09:47:21.449286] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:44.587 [2024-11-20 09:47:21.449329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:44.587 [2024-11-20 09:47:21.449347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:44.587 [2024-11-20 09:47:21.449366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.587 [2024-11-20 09:47:21.449379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.587 [2024-11-20 09:47:21.449392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.587 [2024-11-20 09:47:21.449404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.587 [2024-11-20 09:47:21.449413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:44.587 [2024-11-20 09:47:21.449425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:44.587 [2024-11-20 09:47:21.449439] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:44.587 [2024-11-20 09:47:21.449451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:44.587 [2024-11-20 09:47:21.449467] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:44.587 
[2024-11-20 09:47:21.449477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:44.587 [2024-11-20 09:47:21.449489] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:44.587 [2024-11-20 09:47:21.449500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:44.587 [2024-11-20 09:47:21.449513] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:44.587 [2024-11-20 09:47:21.449525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:44.587 [2024-11-20 09:47:21.449595] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:44.587 [2024-11-20 09:47:21.449613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:44.587 [2024-11-20 09:47:21.449642] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:44.587 [2024-11-20 09:47:21.449650] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:44.587 [2024-11-20 09:47:21.449656] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.587 [2024-11-20 09:47:21.449666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:44.587 [2024-11-20 09:47:21.449680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:44.587 [2024-11-20 09:47:21.449698] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:44.587 [2024-11-20 09:47:21.449714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:44.587 [2024-11-20 09:47:21.449733] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:44.587 [2024-11-20 09:47:21.449746] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:44.587 [2024-11-20 09:47:21.449753] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:44.587 [2024-11-20 09:47:21.449759] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.587 [2024-11-20 09:47:21.449768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:44.587 [2024-11-20 09:47:21.449798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:44.587 [2024-11-20 09:47:21.449822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:13:44.587 [2024-11-20 09:47:21.449837] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:44.587 [2024-11-20 09:47:21.449849] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:44.587 [2024-11-20 09:47:21.449857] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:44.587 [2024-11-20 09:47:21.449863] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.587 [2024-11-20 09:47:21.449871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:44.587 [2024-11-20 09:47:21.449885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:44.587 [2024-11-20 09:47:21.449900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:44.587 [2024-11-20 09:47:21.449911] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:44.587 [2024-11-20 09:47:21.449925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:44.587 [2024-11-20 09:47:21.449936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:44.587 [2024-11-20 09:47:21.449944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:44.587 [2024-11-20 09:47:21.449953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:44.587 [2024-11-20 09:47:21.449962] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:44.587 [2024-11-20 09:47:21.449969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:44.587 [2024-11-20 09:47:21.449978] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:44.587 [2024-11-20 09:47:21.450005] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:44.587 [2024-11-20 09:47:21.450023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:44.587 [2024-11-20 09:47:21.450042] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:44.587 [2024-11-20 09:47:21.450058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:44.587 [2024-11-20 09:47:21.450074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:44.587 [2024-11-20 09:47:21.450086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:44.587 [2024-11-20 09:47:21.450101] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:44.587 [2024-11-20 09:47:21.450112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:44.587 [2024-11-20 09:47:21.450149] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:44.587 [2024-11-20 09:47:21.450160] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:44.587 [2024-11-20 09:47:21.450166] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:44.587 [2024-11-20 09:47:21.450172] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:44.587 [2024-11-20 09:47:21.450177] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:44.587 [2024-11-20 09:47:21.450187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:44.587 [2024-11-20 09:47:21.450199] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:44.587 [2024-11-20 09:47:21.450207] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:44.587 [2024-11-20 09:47:21.450213] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.587 [2024-11-20 09:47:21.450222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:44.587 [2024-11-20 09:47:21.450233] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:44.587 [2024-11-20 09:47:21.450241] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:44.587 [2024-11-20 09:47:21.450247] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.587 [2024-11-20 09:47:21.450255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:44.587 [2024-11-20 09:47:21.450267] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:44.587 [2024-11-20 09:47:21.450275] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:44.587 [2024-11-20 09:47:21.450281] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.587 [2024-11-20 09:47:21.450289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:44.587 [2024-11-20 09:47:21.450301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:44.587 [2024-11-20 09:47:21.450352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:13:44.587 [2024-11-20 09:47:21.450372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:44.587 [2024-11-20 09:47:21.450385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:44.587 ===================================================== 00:13:44.587 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:44.587 ===================================================== 00:13:44.587 Controller Capabilities/Features 00:13:44.587 ================================ 00:13:44.587 Vendor ID: 4e58 00:13:44.587 Subsystem Vendor ID: 4e58 00:13:44.587 Serial Number: SPDK1 00:13:44.587 Model Number: SPDK bdev Controller 00:13:44.587 Firmware Version: 25.01 00:13:44.587 Recommended Arb Burst: 6 00:13:44.588 IEEE OUI Identifier: 8d 6b 50 00:13:44.588 Multi-path I/O 00:13:44.588 May have multiple subsystem ports: Yes 00:13:44.588 May have multiple controllers: Yes 00:13:44.588 Associated with SR-IOV VF: No 00:13:44.588 Max Data Transfer Size: 131072 00:13:44.588 Max Number of Namespaces: 32 00:13:44.588 Max Number of I/O Queues: 127 00:13:44.588 NVMe Specification Version (VS): 1.3 00:13:44.588 NVMe Specification Version (Identify): 1.3 00:13:44.588 Maximum Queue Entries: 256 00:13:44.588 Contiguous Queues Required: Yes 00:13:44.588 Arbitration Mechanisms Supported 00:13:44.588 Weighted Round Robin: Not Supported 00:13:44.588 Vendor Specific: Not Supported 00:13:44.588 Reset Timeout: 15000 ms 00:13:44.588 Doorbell Stride: 4 bytes 00:13:44.588 NVM Subsystem Reset: Not Supported 00:13:44.588 Command Sets Supported 00:13:44.588 NVM Command Set: Supported 00:13:44.588 Boot Partition: Not Supported 00:13:44.588 Memory Page Size Minimum: 4096 bytes 00:13:44.588 Memory Page Size Maximum: 4096 bytes 00:13:44.588 Persistent Memory Region: Not Supported 00:13:44.588 Optional Asynchronous Events Supported 00:13:44.588 Namespace Attribute Notices: Supported 00:13:44.588 Firmware Activation Notices: Not Supported 00:13:44.588 ANA Change Notices: Not Supported 00:13:44.588 PLE Aggregate Log Change Notices: Not Supported 00:13:44.588 LBA Status Info Alert Notices: Not Supported 00:13:44.588 EGE Aggregate Log Change Notices: Not Supported 00:13:44.588 Normal NVM Subsystem Shutdown event: Not Supported 00:13:44.588 Zone Descriptor Change Notices: Not Supported 00:13:44.588 Discovery Log Change Notices: Not Supported 00:13:44.588 Controller Attributes 00:13:44.588 128-bit Host Identifier: Supported 00:13:44.588 Non-Operational Permissive Mode: Not Supported 00:13:44.588 NVM Sets: Not Supported 00:13:44.588 Read Recovery Levels: Not Supported 00:13:44.588 Endurance Groups: Not Supported 00:13:44.588 Predictable Latency Mode: Not Supported 00:13:44.588 Traffic Based Keep ALive: Not Supported 00:13:44.588 Namespace Granularity: Not Supported 00:13:44.588 SQ Associations: Not Supported 00:13:44.588 UUID List: Not Supported 00:13:44.588 Multi-Domain Subsystem: Not Supported 00:13:44.588 Fixed Capacity Management: Not Supported 00:13:44.588 Variable Capacity Management: Not Supported 00:13:44.588 Delete Endurance Group: Not Supported 00:13:44.588 Delete NVM Set: Not Supported 00:13:44.588 Extended LBA Formats Supported: Not Supported 00:13:44.588 Flexible Data Placement Supported: Not Supported 00:13:44.588 00:13:44.588 Controller Memory Buffer Support 00:13:44.588 ================================ 00:13:44.588 
Supported: No 00:13:44.588 00:13:44.588 Persistent Memory Region Support 00:13:44.588 ================================ 00:13:44.588 Supported: No 00:13:44.588 00:13:44.588 Admin Command Set Attributes 00:13:44.588 ============================ 00:13:44.588 Security Send/Receive: Not Supported 00:13:44.588 Format NVM: Not Supported 00:13:44.588 Firmware Activate/Download: Not Supported 00:13:44.588 Namespace Management: Not Supported 00:13:44.588 Device Self-Test: Not Supported 00:13:44.588 Directives: Not Supported 00:13:44.588 NVMe-MI: Not Supported 00:13:44.588 Virtualization Management: Not Supported 00:13:44.588 Doorbell Buffer Config: Not Supported 00:13:44.588 Get LBA Status Capability: Not Supported 00:13:44.588 Command & Feature Lockdown Capability: Not Supported 00:13:44.588 Abort Command Limit: 4 00:13:44.588 Async Event Request Limit: 4 00:13:44.588 Number of Firmware Slots: N/A 00:13:44.588 Firmware Slot 1 Read-Only: N/A 00:13:44.588 Firmware Activation Without Reset: N/A 00:13:44.588 Multiple Update Detection Support: N/A 00:13:44.588 Firmware Update Granularity: No Information Provided 00:13:44.588 Per-Namespace SMART Log: No 00:13:44.588 Asymmetric Namespace Access Log Page: Not Supported 00:13:44.588 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:44.588 Command Effects Log Page: Supported 00:13:44.588 Get Log Page Extended Data: Supported 00:13:44.588 Telemetry Log Pages: Not Supported 00:13:44.588 Persistent Event Log Pages: Not Supported 00:13:44.588 Supported Log Pages Log Page: May Support 00:13:44.588 Commands Supported & Effects Log Page: Not Supported 00:13:44.588 Feature Identifiers & Effects Log Page:May Support 00:13:44.588 NVMe-MI Commands & Effects Log Page: May Support 00:13:44.588 Data Area 4 for Telemetry Log: Not Supported 00:13:44.588 Error Log Page Entries Supported: 128 00:13:44.588 Keep Alive: Supported 00:13:44.588 Keep Alive Granularity: 10000 ms 00:13:44.588 00:13:44.588 NVM Command Set Attributes 00:13:44.588 ========================== 00:13:44.588 Submission Queue Entry Size 00:13:44.588 Max: 64 00:13:44.588 Min: 64 00:13:44.588 Completion Queue Entry Size 00:13:44.588 Max: 16 00:13:44.588 Min: 16 00:13:44.588 Number of Namespaces: 32 00:13:44.588 Compare Command: Supported 00:13:44.588 Write Uncorrectable Command: Not Supported 00:13:44.588 Dataset Management Command: Supported 00:13:44.588 Write Zeroes Command: Supported 00:13:44.588 Set Features Save Field: Not Supported 00:13:44.588 Reservations: Not Supported 00:13:44.588 Timestamp: Not Supported 00:13:44.588 Copy: Supported 00:13:44.588 Volatile Write Cache: Present 00:13:44.588 Atomic Write Unit (Normal): 1 00:13:44.588 Atomic Write Unit (PFail): 1 00:13:44.588 Atomic Compare & Write Unit: 1 00:13:44.588 Fused Compare & Write: Supported 00:13:44.588 Scatter-Gather List 00:13:44.588 SGL Command Set: Supported (Dword aligned) 00:13:44.588 SGL Keyed: Not Supported 00:13:44.588 SGL Bit Bucket Descriptor: Not Supported 00:13:44.588 SGL Metadata Pointer: Not Supported 00:13:44.588 Oversized SGL: Not Supported 00:13:44.588 SGL Metadata Address: Not Supported 00:13:44.588 SGL Offset: Not Supported 00:13:44.588 Transport SGL Data Block: Not Supported 00:13:44.588 Replay Protected Memory Block: Not Supported 00:13:44.588 00:13:44.588 Firmware Slot Information 00:13:44.588 ========================= 00:13:44.588 Active slot: 1 00:13:44.588 Slot 1 Firmware Revision: 25.01 00:13:44.588 00:13:44.588 00:13:44.588 Commands Supported and Effects 00:13:44.588 ============================== 00:13:44.588 Admin 
Commands 00:13:44.588 -------------- 00:13:44.588 Get Log Page (02h): Supported 00:13:44.588 Identify (06h): Supported 00:13:44.588 Abort (08h): Supported 00:13:44.588 Set Features (09h): Supported 00:13:44.588 Get Features (0Ah): Supported 00:13:44.588 Asynchronous Event Request (0Ch): Supported 00:13:44.588 Keep Alive (18h): Supported 00:13:44.588 I/O Commands 00:13:44.588 ------------ 00:13:44.588 Flush (00h): Supported LBA-Change 00:13:44.588 Write (01h): Supported LBA-Change 00:13:44.588 Read (02h): Supported 00:13:44.588 Compare (05h): Supported 00:13:44.588 Write Zeroes (08h): Supported LBA-Change 00:13:44.588 Dataset Management (09h): Supported LBA-Change 00:13:44.588 Copy (19h): Supported LBA-Change 00:13:44.588 00:13:44.588 Error Log 00:13:44.588 ========= 00:13:44.588 00:13:44.588 Arbitration 00:13:44.588 =========== 00:13:44.588 Arbitration Burst: 1 00:13:44.588 00:13:44.588 Power Management 00:13:44.588 ================ 00:13:44.588 Number of Power States: 1 00:13:44.588 Current Power State: Power State #0 00:13:44.588 Power State #0: 00:13:44.588 Max Power: 0.00 W 00:13:44.588 Non-Operational State: Operational 00:13:44.588 Entry Latency: Not Reported 00:13:44.588 Exit Latency: Not Reported 00:13:44.588 Relative Read Throughput: 0 00:13:44.588 Relative Read Latency: 0 00:13:44.588 Relative Write Throughput: 0 00:13:44.588 Relative Write Latency: 0 00:13:44.588 Idle Power: Not Reported 00:13:44.588 Active Power: Not Reported 00:13:44.588 Non-Operational Permissive Mode: Not Supported 00:13:44.588 00:13:44.588 Health Information 00:13:44.588 ================== 00:13:44.588 Critical Warnings: 00:13:44.588 Available Spare Space: OK 00:13:44.588 Temperature: OK 00:13:44.588 Device Reliability: OK 00:13:44.588 Read Only: No 00:13:44.588 Volatile Memory Backup: OK 00:13:44.588 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:44.588 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:44.588 Available Spare: 0% 00:13:44.588 Available Sp[2024-11-20 09:47:21.450512] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:44.588 [2024-11-20 09:47:21.450529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:44.588 [2024-11-20 09:47:21.450591] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:44.588 [2024-11-20 09:47:21.450611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.589 [2024-11-20 09:47:21.450622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.589 [2024-11-20 09:47:21.450632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.589 [2024-11-20 09:47:21.450657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.589 [2024-11-20 09:47:21.453314] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:44.589 [2024-11-20 09:47:21.453339] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:44.589 [2024-11-20 09:47:21.453999] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:44.589 [2024-11-20 09:47:21.454092] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:44.589 [2024-11-20 09:47:21.454106] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:44.589 [2024-11-20 09:47:21.455011] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:44.589 [2024-11-20 09:47:21.455034] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:44.589 [2024-11-20 09:47:21.455091] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:44.589 [2024-11-20 09:47:21.458326] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:44.847 are Threshold: 0% 00:13:44.847 Life Percentage Used: 0% 00:13:44.847 Data Units Read: 0 00:13:44.847 Data Units Written: 0 00:13:44.847 Host Read Commands: 0 00:13:44.847 Host Write Commands: 0 00:13:44.847 Controller Busy Time: 0 minutes 00:13:44.847 Power Cycles: 0 00:13:44.847 Power On Hours: 0 hours 00:13:44.847 Unsafe Shutdowns: 0 00:13:44.847 Unrecoverable Media Errors: 0 00:13:44.847 Lifetime Error Log Entries: 0 00:13:44.847 Warning Temperature Time: 0 minutes 00:13:44.847 Critical Temperature Time: 0 minutes 00:13:44.847 00:13:44.847 Number of Queues 00:13:44.847 ================ 00:13:44.847 Number of I/O Submission Queues: 127 00:13:44.847 Number of I/O Completion Queues: 127 00:13:44.847 00:13:44.847 Active Namespaces 00:13:44.847 ================= 00:13:44.847 Namespace ID:1 00:13:44.847 Error Recovery Timeout: Unlimited 00:13:44.847 Command Set Identifier: NVM (00h) 00:13:44.847 Deallocate: Supported 00:13:44.847 Deallocated/Unwritten Error: Not Supported 00:13:44.847 Deallocated Read Value: Unknown 00:13:44.847 Deallocate in Write Zeroes: Not Supported 00:13:44.847 Deallocated Guard Field: 0xFFFF 00:13:44.847 Flush: Supported 00:13:44.847 Reservation: Supported 00:13:44.847 Namespace Sharing Capabilities: Multiple Controllers 00:13:44.847 Size (in LBAs): 131072 (0GiB) 00:13:44.847 Capacity (in LBAs): 131072 (0GiB) 00:13:44.847 Utilization (in LBAs): 131072 (0GiB) 00:13:44.847 NGUID: 13E6D9DF0C884BF59B07F400C1627759 00:13:44.847 UUID: 13e6d9df-0c88-4bf5-9b07-f400c1627759 00:13:44.847 Thin Provisioning: Not Supported 00:13:44.847 Per-NS Atomic Units: Yes 00:13:44.847 Atomic Boundary Size (Normal): 0 00:13:44.847 Atomic Boundary Size (PFail): 0 00:13:44.847 Atomic Boundary Offset: 0 00:13:44.847 Maximum Single Source Range Length: 65535 00:13:44.847 Maximum Copy Length: 65535 00:13:44.847 Maximum Source Range Count: 1 00:13:44.847 NGUID/EUI64 Never Reused: No 00:13:44.847 Namespace Write Protected: No 00:13:44.847 Number of LBA Formats: 1 00:13:44.847 Current LBA Format: LBA Format #00 00:13:44.847 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:44.847 00:13:44.847 09:47:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
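The step above launches spdk_nvme_perf from the harness against the first vfio-user controller. A minimal sketch of the same read-workload invocation issued by hand, using the binary path and flags exactly as they appear in this log; the comments on -s and -g are assumptions, since the log itself does not spell out their meaning.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
perf_args=(
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
    -s 256      # hugepage memory size in MB for the client process (assumed meaning)
    -g          # single-file-segments mode for DPDK memory (assumed meaning)
    -q 128      # queue depth
    -o 4096     # I/O size in bytes
    -w read     # workload type
    -t 5        # run time in seconds
    -c 0x2      # core mask: run the I/O worker on core 1
)
"$SPDK/build/bin/spdk_nvme_perf" "${perf_args[@]}"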
00:13:44.847 [2024-11-20 09:47:21.710170] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:50.174 Initializing NVMe Controllers 00:13:50.174 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:50.174 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:50.174 Initialization complete. Launching workers. 00:13:50.174 ======================================================== 00:13:50.174 Latency(us) 00:13:50.174 Device Information : IOPS MiB/s Average min max 00:13:50.174 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33183.68 129.62 3856.79 1173.21 10385.35 00:13:50.174 ======================================================== 00:13:50.174 Total : 33183.68 129.62 3856.79 1173.21 10385.35 00:13:50.174 00:13:50.174 [2024-11-20 09:47:26.730158] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:50.174 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:50.174 [2024-11-20 09:47:26.997396] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:55.437 Initializing NVMe Controllers 00:13:55.437 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:55.437 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:55.437 Initialization complete. Launching workers. 
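The Device Information table above reports both IOPS and MiB/s for the read workload; the bandwidth column is simply the IOPS figure scaled by the 4096-byte I/O size used in this run, which can be checked directly:

awk 'BEGIN { printf "%.2f MiB/s\n", 33183.68 * 4096 / (1024 * 1024) }'
# prints 129.62 MiB/s, matching the VFIOUSER NSID 1 row of the read run above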
00:13:55.437 ======================================================== 00:13:55.437 Latency(us) 00:13:55.437 Device Information : IOPS MiB/s Average min max 00:13:55.437 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15922.07 62.20 8044.25 5982.26 15975.85 00:13:55.437 ======================================================== 00:13:55.437 Total : 15922.07 62.20 8044.25 5982.26 15975.85 00:13:55.437 00:13:55.437 [2024-11-20 09:47:32.043256] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:55.437 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:55.437 [2024-11-20 09:47:32.262336] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:00.703 [2024-11-20 09:47:37.328661] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:00.703 Initializing NVMe Controllers 00:14:00.703 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:00.703 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:00.703 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:00.703 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:00.703 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:00.703 Initialization complete. Launching workers. 00:14:00.703 Starting thread on core 2 00:14:00.703 Starting thread on core 3 00:14:00.703 Starting thread on core 1 00:14:00.703 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:00.961 [2024-11-20 09:47:37.657489] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:05.145 [2024-11-20 09:47:41.404433] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:05.145 Initializing NVMe Controllers 00:14:05.145 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:05.145 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:05.145 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:05.145 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:05.145 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:05.145 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:05.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:05.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:05.145 Initialization complete. Launching workers. 
00:14:05.145 Starting thread on core 1 with urgent priority queue 00:14:05.145 Starting thread on core 2 with urgent priority queue 00:14:05.145 Starting thread on core 3 with urgent priority queue 00:14:05.145 Starting thread on core 0 with urgent priority queue 00:14:05.145 SPDK bdev Controller (SPDK1 ) core 0: 3753.33 IO/s 26.64 secs/100000 ios 00:14:05.145 SPDK bdev Controller (SPDK1 ) core 1: 3842.67 IO/s 26.02 secs/100000 ios 00:14:05.146 SPDK bdev Controller (SPDK1 ) core 2: 3832.67 IO/s 26.09 secs/100000 ios 00:14:05.146 SPDK bdev Controller (SPDK1 ) core 3: 3831.33 IO/s 26.10 secs/100000 ios 00:14:05.146 ======================================================== 00:14:05.146 00:14:05.146 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:05.146 [2024-11-20 09:47:41.732874] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:05.146 Initializing NVMe Controllers 00:14:05.146 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:05.146 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:05.146 Namespace ID: 1 size: 0GB 00:14:05.146 Initialization complete. 00:14:05.146 INFO: using host memory buffer for IO 00:14:05.146 Hello world! 00:14:05.146 [2024-11-20 09:47:41.766542] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:05.146 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:05.403 [2024-11-20 09:47:42.082779] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:06.338 Initializing NVMe Controllers 00:14:06.338 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:06.338 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:06.338 Initialization complete. Launching workers. 
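The reconnect, arbitration, and hello_world examples in this part of the run all take the same transport ID string for the vfio-user controller. A sketch of driving the three binaries from one variable, with per-tool flags copied from the invocations logged above (paths as used by this harness):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
# same -r transport ID for every example; remaining flags as in the log
"$SPDK/build/examples/reconnect"   -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
"$SPDK/build/examples/arbitration" -r "$TRID" -t 3 -d 256 -g
"$SPDK/build/examples/hello_world" -r "$TRID" -d 256 -g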
00:14:06.338 submit (in ns) avg, min, max = 6794.3, 3543.3, 4017113.3 00:14:06.338 complete (in ns) avg, min, max = 28722.2, 2063.3, 4039753.3 00:14:06.338 00:14:06.338 Submit histogram 00:14:06.338 ================ 00:14:06.338 Range in us Cumulative Count 00:14:06.338 3.532 - 3.556: 0.0078% ( 1) 00:14:06.338 3.556 - 3.579: 0.2259% ( 28) 00:14:06.338 3.579 - 3.603: 2.1812% ( 251) 00:14:06.338 3.603 - 3.627: 6.7461% ( 586) 00:14:06.338 3.627 - 3.650: 16.7329% ( 1282) 00:14:06.338 3.650 - 3.674: 28.5892% ( 1522) 00:14:06.338 3.674 - 3.698: 39.4718% ( 1397) 00:14:06.338 3.698 - 3.721: 46.5763% ( 912) 00:14:06.338 3.721 - 3.745: 50.9543% ( 562) 00:14:06.338 3.745 - 3.769: 55.1297% ( 536) 00:14:06.338 3.769 - 3.793: 59.5388% ( 566) 00:14:06.338 3.793 - 3.816: 63.2391% ( 475) 00:14:06.338 3.816 - 3.840: 66.2382% ( 385) 00:14:06.338 3.840 - 3.864: 69.6424% ( 437) 00:14:06.338 3.864 - 3.887: 73.9191% ( 549) 00:14:06.338 3.887 - 3.911: 78.9670% ( 648) 00:14:06.338 3.911 - 3.935: 83.0100% ( 519) 00:14:06.338 3.935 - 3.959: 85.9001% ( 371) 00:14:06.338 3.959 - 3.982: 87.6840% ( 229) 00:14:06.338 3.982 - 4.006: 89.3199% ( 210) 00:14:06.338 4.006 - 4.030: 90.4495% ( 145) 00:14:06.338 4.030 - 4.053: 91.5946% ( 147) 00:14:06.338 4.053 - 4.077: 92.6618% ( 137) 00:14:06.338 4.077 - 4.101: 93.6667% ( 129) 00:14:06.338 4.101 - 4.124: 94.4380% ( 99) 00:14:06.338 4.124 - 4.148: 95.2014% ( 98) 00:14:06.338 4.148 - 4.172: 95.6532% ( 58) 00:14:06.338 4.172 - 4.196: 95.9959% ( 44) 00:14:06.338 4.196 - 4.219: 96.2296% ( 30) 00:14:06.338 4.219 - 4.243: 96.4322% ( 26) 00:14:06.338 4.243 - 4.267: 96.6347% ( 26) 00:14:06.338 4.267 - 4.290: 96.7438% ( 14) 00:14:06.338 4.290 - 4.314: 96.8451% ( 13) 00:14:06.338 4.314 - 4.338: 96.9074% ( 8) 00:14:06.338 4.338 - 4.361: 96.9697% ( 8) 00:14:06.338 4.361 - 4.385: 97.0086% ( 5) 00:14:06.338 4.385 - 4.409: 97.0788% ( 9) 00:14:06.338 4.409 - 4.433: 97.1411% ( 8) 00:14:06.338 4.433 - 4.456: 97.1567% ( 2) 00:14:06.338 4.456 - 4.480: 97.1800% ( 3) 00:14:06.338 4.480 - 4.504: 97.1878% ( 1) 00:14:06.338 4.504 - 4.527: 97.2112% ( 3) 00:14:06.338 4.527 - 4.551: 97.2190% ( 1) 00:14:06.338 4.551 - 4.575: 97.2268% ( 1) 00:14:06.338 4.717 - 4.741: 97.2346% ( 1) 00:14:06.338 4.741 - 4.764: 97.2657% ( 4) 00:14:06.338 4.764 - 4.788: 97.2891% ( 3) 00:14:06.338 4.788 - 4.812: 97.3047% ( 2) 00:14:06.338 4.812 - 4.836: 97.3358% ( 4) 00:14:06.338 4.836 - 4.859: 97.3670% ( 4) 00:14:06.338 4.859 - 4.883: 97.4371% ( 9) 00:14:06.338 4.883 - 4.907: 97.4683% ( 4) 00:14:06.338 4.907 - 4.930: 97.5384% ( 9) 00:14:06.338 4.930 - 4.954: 97.5695% ( 4) 00:14:06.338 4.954 - 4.978: 97.6085% ( 5) 00:14:06.338 4.978 - 5.001: 97.6474% ( 5) 00:14:06.338 5.001 - 5.025: 97.6942% ( 6) 00:14:06.338 5.025 - 5.049: 97.7253% ( 4) 00:14:06.338 5.049 - 5.073: 97.7721% ( 6) 00:14:06.338 5.073 - 5.096: 97.8344% ( 8) 00:14:06.338 5.096 - 5.120: 97.8500% ( 2) 00:14:06.338 5.120 - 5.144: 97.8733% ( 3) 00:14:06.338 5.144 - 5.167: 97.8889% ( 2) 00:14:06.338 5.167 - 5.191: 97.9045% ( 2) 00:14:06.338 5.191 - 5.215: 97.9201% ( 2) 00:14:06.338 5.215 - 5.239: 97.9279% ( 1) 00:14:06.338 5.262 - 5.286: 97.9434% ( 2) 00:14:06.338 5.286 - 5.310: 97.9512% ( 1) 00:14:06.338 5.310 - 5.333: 97.9590% ( 1) 00:14:06.338 5.333 - 5.357: 97.9668% ( 1) 00:14:06.338 5.357 - 5.381: 97.9746% ( 1) 00:14:06.338 5.428 - 5.452: 97.9824% ( 1) 00:14:06.338 5.452 - 5.476: 97.9902% ( 1) 00:14:06.338 5.547 - 5.570: 97.9980% ( 1) 00:14:06.338 5.641 - 5.665: 98.0058% ( 1) 00:14:06.338 5.665 - 5.689: 98.0213% ( 2) 00:14:06.338 5.713 - 5.736: 98.0291% ( 1) 
00:14:06.338 5.736 - 5.760: 98.0369% ( 1) 00:14:06.338 5.926 - 5.950: 98.0447% ( 1) 00:14:06.338 5.973 - 5.997: 98.0525% ( 1) 00:14:06.338 6.021 - 6.044: 98.0681% ( 2) 00:14:06.338 6.068 - 6.116: 98.0837% ( 2) 00:14:06.338 6.116 - 6.163: 98.1148% ( 4) 00:14:06.338 6.163 - 6.210: 98.1538% ( 5) 00:14:06.338 6.258 - 6.305: 98.1616% ( 1) 00:14:06.338 6.305 - 6.353: 98.1771% ( 2) 00:14:06.338 6.353 - 6.400: 98.1849% ( 1) 00:14:06.338 6.400 - 6.447: 98.1927% ( 1) 00:14:06.338 6.495 - 6.542: 98.2005% ( 1) 00:14:06.338 6.542 - 6.590: 98.2083% ( 1) 00:14:06.338 6.590 - 6.637: 98.2161% ( 1) 00:14:06.338 6.684 - 6.732: 98.2239% ( 1) 00:14:06.338 6.732 - 6.779: 98.2395% ( 2) 00:14:06.338 7.016 - 7.064: 98.2473% ( 1) 00:14:06.338 7.064 - 7.111: 98.2550% ( 1) 00:14:06.338 7.253 - 7.301: 98.2628% ( 1) 00:14:06.338 7.396 - 7.443: 98.2706% ( 1) 00:14:06.338 7.443 - 7.490: 98.2784% ( 1) 00:14:06.338 7.633 - 7.680: 98.2862% ( 1) 00:14:06.338 7.727 - 7.775: 98.3018% ( 2) 00:14:06.338 7.870 - 7.917: 98.3096% ( 1) 00:14:06.338 8.012 - 8.059: 98.3174% ( 1) 00:14:06.338 8.154 - 8.201: 98.3252% ( 1) 00:14:06.338 8.249 - 8.296: 98.3329% ( 1) 00:14:06.338 8.296 - 8.344: 98.3563% ( 3) 00:14:06.338 8.439 - 8.486: 98.3641% ( 1) 00:14:06.338 8.533 - 8.581: 98.3797% ( 2) 00:14:06.338 8.581 - 8.628: 98.4108% ( 4) 00:14:06.338 8.628 - 8.676: 98.4186% ( 1) 00:14:06.338 8.676 - 8.723: 98.4342% ( 2) 00:14:06.338 8.723 - 8.770: 98.4420% ( 1) 00:14:06.338 8.770 - 8.818: 98.4498% ( 1) 00:14:06.338 8.960 - 9.007: 98.4654% ( 2) 00:14:06.338 9.055 - 9.102: 98.4810% ( 2) 00:14:06.338 9.102 - 9.150: 98.4887% ( 1) 00:14:06.338 9.292 - 9.339: 98.4965% ( 1) 00:14:06.338 9.481 - 9.529: 98.5043% ( 1) 00:14:06.338 9.529 - 9.576: 98.5199% ( 2) 00:14:06.338 9.624 - 9.671: 98.5277% ( 1) 00:14:06.338 9.671 - 9.719: 98.5355% ( 1) 00:14:06.338 9.766 - 9.813: 98.5433% ( 1) 00:14:06.338 9.861 - 9.908: 98.5511% ( 1) 00:14:06.338 9.908 - 9.956: 98.5666% ( 2) 00:14:06.338 9.956 - 10.003: 98.5744% ( 1) 00:14:06.338 10.003 - 10.050: 98.5822% ( 1) 00:14:06.338 10.098 - 10.145: 98.5900% ( 1) 00:14:06.338 10.145 - 10.193: 98.5978% ( 1) 00:14:06.338 10.193 - 10.240: 98.6134% ( 2) 00:14:06.338 10.240 - 10.287: 98.6212% ( 1) 00:14:06.338 10.287 - 10.335: 98.6290% ( 1) 00:14:06.338 10.477 - 10.524: 98.6368% ( 1) 00:14:06.338 10.572 - 10.619: 98.6445% ( 1) 00:14:06.338 10.667 - 10.714: 98.6601% ( 2) 00:14:06.338 10.714 - 10.761: 98.6679% ( 1) 00:14:06.338 10.761 - 10.809: 98.6757% ( 1) 00:14:06.338 10.856 - 10.904: 98.6991% ( 3) 00:14:06.338 10.904 - 10.951: 98.7069% ( 1) 00:14:06.338 11.141 - 11.188: 98.7147% ( 1) 00:14:06.338 11.378 - 11.425: 98.7224% ( 1) 00:14:06.338 11.473 - 11.520: 98.7302% ( 1) 00:14:06.338 11.567 - 11.615: 98.7380% ( 1) 00:14:06.338 11.615 - 11.662: 98.7458% ( 1) 00:14:06.338 11.899 - 11.947: 98.7536% ( 1) 00:14:06.338 11.947 - 11.994: 98.7614% ( 1) 00:14:06.338 12.041 - 12.089: 98.7692% ( 1) 00:14:06.338 12.136 - 12.231: 98.7848% ( 2) 00:14:06.338 12.231 - 12.326: 98.8003% ( 2) 00:14:06.338 12.705 - 12.800: 98.8159% ( 2) 00:14:06.338 12.895 - 12.990: 98.8315% ( 2) 00:14:06.338 13.084 - 13.179: 98.8393% ( 1) 00:14:06.338 13.179 - 13.274: 98.8471% ( 1) 00:14:06.338 13.274 - 13.369: 98.8549% ( 1) 00:14:06.338 13.369 - 13.464: 98.8705% ( 2) 00:14:06.338 13.464 - 13.559: 98.8782% ( 1) 00:14:06.338 13.559 - 13.653: 98.8860% ( 1) 00:14:06.338 13.748 - 13.843: 98.8938% ( 1) 00:14:06.338 14.412 - 14.507: 98.9016% ( 1) 00:14:06.338 14.507 - 14.601: 98.9094% ( 1) 00:14:06.338 14.981 - 15.076: 98.9172% ( 1) 00:14:06.338 16.972 - 17.067: 98.9250% 
( 1) 00:14:06.338 17.256 - 17.351: 98.9328% ( 1) 00:14:06.338 17.351 - 17.446: 98.9873% ( 7) 00:14:06.338 17.446 - 17.541: 99.0185% ( 4) 00:14:06.338 17.541 - 17.636: 99.0340% ( 2) 00:14:06.338 17.636 - 17.730: 99.1042% ( 9) 00:14:06.338 17.730 - 17.825: 99.1353% ( 4) 00:14:06.338 17.825 - 17.920: 99.2054% ( 9) 00:14:06.338 17.920 - 18.015: 99.2600% ( 7) 00:14:06.338 18.015 - 18.110: 99.3067% ( 6) 00:14:06.338 18.110 - 18.204: 99.4002% ( 12) 00:14:06.338 18.204 - 18.299: 99.4313% ( 4) 00:14:06.338 18.299 - 18.394: 99.4781% ( 6) 00:14:06.338 18.394 - 18.489: 99.5560% ( 10) 00:14:06.338 18.489 - 18.584: 99.6183% ( 8) 00:14:06.338 18.584 - 18.679: 99.6417% ( 3) 00:14:06.338 18.679 - 18.773: 99.6962% ( 7) 00:14:06.338 18.773 - 18.868: 99.7351% ( 5) 00:14:06.338 18.868 - 18.963: 99.7897% ( 7) 00:14:06.338 18.963 - 19.058: 99.8208% ( 4) 00:14:06.338 19.058 - 19.153: 99.8364% ( 2) 00:14:06.338 19.153 - 19.247: 99.8442% ( 1) 00:14:06.338 19.247 - 19.342: 99.8520% ( 1) 00:14:06.338 19.342 - 19.437: 99.8598% ( 1) 00:14:06.338 19.437 - 19.532: 99.8754% ( 2) 00:14:06.338 20.385 - 20.480: 99.8832% ( 1) 00:14:06.338 20.575 - 20.670: 99.8909% ( 1) 00:14:06.338 23.135 - 23.230: 99.8987% ( 1) 00:14:06.338 23.609 - 23.704: 99.9065% ( 1) 00:14:06.338 23.893 - 23.988: 99.9143% ( 1) 00:14:06.338 24.841 - 25.031: 99.9221% ( 1) 00:14:06.338 25.221 - 25.410: 99.9299% ( 1) 00:14:06.338 3980.705 - 4004.978: 99.9844% ( 7) 00:14:06.338 4004.978 - 4029.250: 100.0000% ( 2) 00:14:06.338 00:14:06.338 Complete histogram 00:14:06.338 ================== 00:14:06.338 Range in us Cumulative Count 00:14:06.338 2.062 - 2.074: 6.6137% ( 849) 00:14:06.338 2.074 - 2.086: 43.8654% ( 4782) 00:14:06.338 2.086 - 2.098: 46.7555% ( 371) 00:14:06.339 2.098 - 2.110: 52.2630% ( 707) 00:14:06.339 2.110 - 2.121: 59.1961% ( 890) 00:14:06.339 2.121 - 2.133: 60.8008% ( 206) 00:14:06.339 2.133 - 2.145: 68.0221% ( 927) 00:14:06.339 2.145 - 2.157: 76.7936% ( 1126) 00:14:06.339 2.157 - 2.169: 77.5571% ( 98) 00:14:06.339 2.169 - 2.181: 79.5591% ( 257) 00:14:06.339 2.181 - 2.193: 81.2651% ( 219) 00:14:06.339 2.193 - 2.204: 81.7559% ( 63) 00:14:06.339 2.204 - 2.216: 84.4979% ( 352) 00:14:06.339 2.216 - 2.228: 88.8993% ( 565) 00:14:06.339 2.228 - 2.240: 90.9403% ( 262) 00:14:06.339 2.240 - 2.252: 92.2723% ( 171) 00:14:06.339 2.252 - 2.264: 93.1059% ( 107) 00:14:06.339 2.264 - 2.276: 93.3629% ( 33) 00:14:06.339 2.276 - 2.287: 93.7758% ( 53) 00:14:06.339 2.287 - 2.299: 94.2666% ( 63) 00:14:06.339 2.299 - 2.311: 94.8275% ( 72) 00:14:06.339 2.311 - 2.323: 95.2871% ( 59) 00:14:06.339 2.323 - 2.335: 95.3961% ( 14) 00:14:06.339 2.335 - 2.347: 95.4429% ( 6) 00:14:06.339 2.347 - 2.359: 95.5363% ( 12) 00:14:06.339 2.359 - 2.370: 95.6376% ( 13) 00:14:06.339 2.370 - 2.382: 95.8791% ( 31) 00:14:06.339 2.382 - 2.394: 96.2141% ( 43) 00:14:06.339 2.394 - 2.406: 96.5802% ( 47) 00:14:06.339 2.406 - 2.418: 96.7905% ( 27) 00:14:06.339 2.418 - 2.430: 96.9541% ( 21) 00:14:06.339 2.430 - 2.441: 97.1644% ( 27) 00:14:06.339 2.441 - 2.453: 97.3436% ( 23) 00:14:06.339 2.453 - 2.465: 97.5228% ( 23) 00:14:06.339 2.465 - 2.477: 97.6942% ( 22) 00:14:06.339 2.477 - 2.489: 97.8032% ( 14) 00:14:06.339 2.489 - 2.501: 97.9434% ( 18) 00:14:06.339 2.501 - 2.513: 98.0759% ( 17) 00:14:06.339 2.513 - 2.524: 98.1304% ( 7) 00:14:06.339 2.524 - 2.536: 98.2161% ( 11) 00:14:06.339 2.536 - 2.548: 98.3018% ( 11) 00:14:06.339 2.548 - 2.560: 98.3485% ( 6) 00:14:06.339 2.560 - 2.572: 98.3641% ( 2) 00:14:06.339 2.572 - 2.584: 98.3719% ( 1) 00:14:06.339 2.584 - 2.596: 98.4031% ( 4) 00:14:06.339 2.596 - 
2.607: 98.4186% ( 2) 00:14:06.339 2.619 - 2.631: 98.4342% ( 2) 00:14:06.339 2.631 - 2.643: 98.4498% ( 2) 00:14:06.339 2.643 - 2.655: 98.4576% ( 1) 00:14:06.339 2.714 - 2.726: 98.4654% ( 1) 00:14:06.339 2.927 - 2.939: 98.4732% ( 1) 00:14:06.339 3.437 - 3.461: 98.4810% ( 1) 00:14:06.339 3.484 - 3.508: 98.5043% ( 3) 00:14:06.339 3.508 - 3.532: 98.5121% ( 1) 00:14:06.339 3.532 - 3.556: 98.5433% ( 4) 00:14:06.339 3.556 - 3.579: 98.5511% ( 1) 00:14:06.339 3.627 - 3.650: 98.5666% ( 2) 00:14:06.339 3.650 - 3.674: 98.5744% ( 1) 00:14:06.339 3.674 - 3.698: 9[2024-11-20 09:47:43.105869] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:06.339 8.5822% ( 1) 00:14:06.339 3.698 - 3.721: 98.5900% ( 1) 00:14:06.339 3.935 - 3.959: 98.5978% ( 1) 00:14:06.339 3.959 - 3.982: 98.6056% ( 1) 00:14:06.339 4.077 - 4.101: 98.6134% ( 1) 00:14:06.339 4.101 - 4.124: 98.6212% ( 1) 00:14:06.339 5.025 - 5.049: 98.6290% ( 1) 00:14:06.339 5.594 - 5.618: 98.6368% ( 1) 00:14:06.339 5.855 - 5.879: 98.6445% ( 1) 00:14:06.339 5.997 - 6.021: 98.6523% ( 1) 00:14:06.339 6.542 - 6.590: 98.6601% ( 1) 00:14:06.339 6.637 - 6.684: 98.6679% ( 1) 00:14:06.339 6.732 - 6.779: 98.6757% ( 1) 00:14:06.339 6.827 - 6.874: 98.6835% ( 1) 00:14:06.339 7.159 - 7.206: 98.6913% ( 1) 00:14:06.339 7.396 - 7.443: 98.6991% ( 1) 00:14:06.339 7.443 - 7.490: 98.7147% ( 2) 00:14:06.339 7.538 - 7.585: 98.7224% ( 1) 00:14:06.339 7.870 - 7.917: 98.7302% ( 1) 00:14:06.339 8.012 - 8.059: 98.7380% ( 1) 00:14:06.339 8.107 - 8.154: 98.7458% ( 1) 00:14:06.339 8.154 - 8.201: 98.7536% ( 1) 00:14:06.339 9.007 - 9.055: 98.7614% ( 1) 00:14:06.339 9.102 - 9.150: 98.7692% ( 1) 00:14:06.339 15.739 - 15.834: 98.7770% ( 1) 00:14:06.339 15.834 - 15.929: 98.7926% ( 2) 00:14:06.339 15.929 - 16.024: 98.8549% ( 8) 00:14:06.339 16.024 - 16.119: 98.9016% ( 6) 00:14:06.339 16.119 - 16.213: 98.9250% ( 3) 00:14:06.339 16.213 - 16.308: 98.9561% ( 4) 00:14:06.339 16.308 - 16.403: 99.0029% ( 6) 00:14:06.339 16.403 - 16.498: 99.0263% ( 3) 00:14:06.339 16.498 - 16.593: 99.0730% ( 6) 00:14:06.339 16.593 - 16.687: 99.1042% ( 4) 00:14:06.339 16.687 - 16.782: 99.1431% ( 5) 00:14:06.339 16.782 - 16.877: 99.1821% ( 5) 00:14:06.339 16.877 - 16.972: 99.2054% ( 3) 00:14:06.339 16.972 - 17.067: 99.2132% ( 1) 00:14:06.339 17.067 - 17.161: 99.2210% ( 1) 00:14:06.339 17.161 - 17.256: 99.2288% ( 1) 00:14:06.339 17.351 - 17.446: 99.2522% ( 3) 00:14:06.339 17.730 - 17.825: 99.2600% ( 1) 00:14:06.339 17.825 - 17.920: 99.2677% ( 1) 00:14:06.339 17.920 - 18.015: 99.2833% ( 2) 00:14:06.339 18.015 - 18.110: 99.2911% ( 1) 00:14:06.339 18.204 - 18.299: 99.3067% ( 2) 00:14:06.339 18.489 - 18.584: 99.3223% ( 2) 00:14:06.339 18.584 - 18.679: 99.3301% ( 1) 00:14:06.339 25.600 - 25.790: 99.3379% ( 1) 00:14:06.339 3980.705 - 4004.978: 99.8208% ( 62) 00:14:06.339 4004.978 - 4029.250: 99.9922% ( 22) 00:14:06.339 4029.250 - 4053.523: 100.0000% ( 1) 00:14:06.339 00:14:06.339 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:06.339 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:06.339 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:06.339 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:06.339 
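The aer_vfio_user step that starts here exercises the target's JSON-RPC interface: it dumps the current subsystems, creates a Malloc3 bdev, and attaches it as a second namespace to trigger a namespace-attribute AER, all visible in the output that follows. A minimal sketch of the same three rpc.py calls issued by hand, with arguments as they appear in this log:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" nvmf_get_subsystems                                            # list subsystems and their namespaces
"$RPC" bdev_malloc_create 64 512 --name Malloc3                       # size in MB, block size in bytes
"$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2  # attach as NSID 2; the AER callback fires on this change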
09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:06.597 [ 00:14:06.597 { 00:14:06.597 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:06.597 "subtype": "Discovery", 00:14:06.597 "listen_addresses": [], 00:14:06.597 "allow_any_host": true, 00:14:06.597 "hosts": [] 00:14:06.597 }, 00:14:06.597 { 00:14:06.597 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:06.597 "subtype": "NVMe", 00:14:06.597 "listen_addresses": [ 00:14:06.597 { 00:14:06.598 "trtype": "VFIOUSER", 00:14:06.598 "adrfam": "IPv4", 00:14:06.598 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:06.598 "trsvcid": "0" 00:14:06.598 } 00:14:06.598 ], 00:14:06.598 "allow_any_host": true, 00:14:06.598 "hosts": [], 00:14:06.598 "serial_number": "SPDK1", 00:14:06.598 "model_number": "SPDK bdev Controller", 00:14:06.598 "max_namespaces": 32, 00:14:06.598 "min_cntlid": 1, 00:14:06.598 "max_cntlid": 65519, 00:14:06.598 "namespaces": [ 00:14:06.598 { 00:14:06.598 "nsid": 1, 00:14:06.598 "bdev_name": "Malloc1", 00:14:06.598 "name": "Malloc1", 00:14:06.598 "nguid": "13E6D9DF0C884BF59B07F400C1627759", 00:14:06.598 "uuid": "13e6d9df-0c88-4bf5-9b07-f400c1627759" 00:14:06.598 } 00:14:06.598 ] 00:14:06.598 }, 00:14:06.598 { 00:14:06.598 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:06.598 "subtype": "NVMe", 00:14:06.598 "listen_addresses": [ 00:14:06.598 { 00:14:06.598 "trtype": "VFIOUSER", 00:14:06.598 "adrfam": "IPv4", 00:14:06.598 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:06.598 "trsvcid": "0" 00:14:06.598 } 00:14:06.598 ], 00:14:06.598 "allow_any_host": true, 00:14:06.598 "hosts": [], 00:14:06.598 "serial_number": "SPDK2", 00:14:06.598 "model_number": "SPDK bdev Controller", 00:14:06.598 "max_namespaces": 32, 00:14:06.598 "min_cntlid": 1, 00:14:06.598 "max_cntlid": 65519, 00:14:06.598 "namespaces": [ 00:14:06.598 { 00:14:06.598 "nsid": 1, 00:14:06.598 "bdev_name": "Malloc2", 00:14:06.598 "name": "Malloc2", 00:14:06.598 "nguid": "DB46D4A59FC049368E7F08FA4966356D", 00:14:06.598 "uuid": "db46d4a5-9fc0-4936-8e7f-08fa4966356d" 00:14:06.598 } 00:14:06.598 ] 00:14:06.598 } 00:14:06.598 ] 00:14:06.598 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:06.598 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3713424 00:14:06.598 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:06.598 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:06.598 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:06.598 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:06.598 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:06.598 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:06.598 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:06.598 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:06.856 [2024-11-20 09:47:43.607836] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:06.856 Malloc3 00:14:06.856 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:07.114 [2024-11-20 09:47:44.009808] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:07.372 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:07.372 Asynchronous Event Request test 00:14:07.372 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:07.372 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:07.372 Registering asynchronous event callbacks... 00:14:07.372 Starting namespace attribute notice tests for all controllers... 00:14:07.372 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:07.372 aer_cb - Changed Namespace 00:14:07.372 Cleaning up... 00:14:07.372 [ 00:14:07.372 { 00:14:07.372 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:07.372 "subtype": "Discovery", 00:14:07.372 "listen_addresses": [], 00:14:07.372 "allow_any_host": true, 00:14:07.372 "hosts": [] 00:14:07.372 }, 00:14:07.372 { 00:14:07.372 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:07.372 "subtype": "NVMe", 00:14:07.372 "listen_addresses": [ 00:14:07.372 { 00:14:07.372 "trtype": "VFIOUSER", 00:14:07.372 "adrfam": "IPv4", 00:14:07.372 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:07.372 "trsvcid": "0" 00:14:07.372 } 00:14:07.372 ], 00:14:07.372 "allow_any_host": true, 00:14:07.372 "hosts": [], 00:14:07.372 "serial_number": "SPDK1", 00:14:07.372 "model_number": "SPDK bdev Controller", 00:14:07.372 "max_namespaces": 32, 00:14:07.372 "min_cntlid": 1, 00:14:07.372 "max_cntlid": 65519, 00:14:07.372 "namespaces": [ 00:14:07.372 { 00:14:07.372 "nsid": 1, 00:14:07.372 "bdev_name": "Malloc1", 00:14:07.372 "name": "Malloc1", 00:14:07.372 "nguid": "13E6D9DF0C884BF59B07F400C1627759", 00:14:07.372 "uuid": "13e6d9df-0c88-4bf5-9b07-f400c1627759" 00:14:07.372 }, 00:14:07.372 { 00:14:07.372 "nsid": 2, 00:14:07.372 "bdev_name": "Malloc3", 00:14:07.372 "name": "Malloc3", 00:14:07.372 "nguid": "6D19AE7D7C0041DCB98F6148D5D7D5AA", 00:14:07.372 "uuid": "6d19ae7d-7c00-41dc-b98f-6148d5d7d5aa" 00:14:07.372 } 00:14:07.372 ] 00:14:07.372 }, 00:14:07.372 { 00:14:07.372 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:07.372 "subtype": "NVMe", 00:14:07.372 "listen_addresses": [ 00:14:07.372 { 00:14:07.372 "trtype": "VFIOUSER", 00:14:07.372 "adrfam": "IPv4", 00:14:07.372 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:07.372 "trsvcid": "0" 00:14:07.372 } 00:14:07.372 ], 00:14:07.372 "allow_any_host": true, 00:14:07.372 "hosts": [], 00:14:07.372 "serial_number": "SPDK2", 00:14:07.372 "model_number": "SPDK bdev 
Controller", 00:14:07.372 "max_namespaces": 32, 00:14:07.372 "min_cntlid": 1, 00:14:07.372 "max_cntlid": 65519, 00:14:07.372 "namespaces": [ 00:14:07.372 { 00:14:07.372 "nsid": 1, 00:14:07.372 "bdev_name": "Malloc2", 00:14:07.372 "name": "Malloc2", 00:14:07.372 "nguid": "DB46D4A59FC049368E7F08FA4966356D", 00:14:07.372 "uuid": "db46d4a5-9fc0-4936-8e7f-08fa4966356d" 00:14:07.372 } 00:14:07.372 ] 00:14:07.372 } 00:14:07.372 ] 00:14:07.662 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3713424 00:14:07.662 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:07.662 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:07.662 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:07.662 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:07.662 [2024-11-20 09:47:44.308632] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:14:07.662 [2024-11-20 09:47:44.308688] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3713557 ] 00:14:07.662 [2024-11-20 09:47:44.360200] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:07.662 [2024-11-20 09:47:44.367623] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:07.662 [2024-11-20 09:47:44.367659] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc4a30b8000 00:14:07.662 [2024-11-20 09:47:44.368627] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:07.662 [2024-11-20 09:47:44.369630] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:07.662 [2024-11-20 09:47:44.370633] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:07.662 [2024-11-20 09:47:44.371642] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:07.662 [2024-11-20 09:47:44.372650] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:07.662 [2024-11-20 09:47:44.373656] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:07.662 [2024-11-20 09:47:44.374664] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:07.662 [2024-11-20 09:47:44.375666] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:14:07.662 [2024-11-20 09:47:44.376688] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:07.662 [2024-11-20 09:47:44.376710] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc4a30ad000 00:14:07.662 [2024-11-20 09:47:44.377826] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:07.662 [2024-11-20 09:47:44.393960] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:07.662 [2024-11-20 09:47:44.394002] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:07.662 [2024-11-20 09:47:44.396086] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:07.663 [2024-11-20 09:47:44.396145] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:07.663 [2024-11-20 09:47:44.396237] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:07.663 [2024-11-20 09:47:44.396264] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:07.663 [2024-11-20 09:47:44.396275] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:07.663 [2024-11-20 09:47:44.397315] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:07.663 [2024-11-20 09:47:44.397337] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:07.663 [2024-11-20 09:47:44.397350] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:07.663 [2024-11-20 09:47:44.398097] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:07.663 [2024-11-20 09:47:44.398117] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:07.663 [2024-11-20 09:47:44.398131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:07.663 [2024-11-20 09:47:44.401327] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:07.663 [2024-11-20 09:47:44.401349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:07.663 [2024-11-20 09:47:44.402122] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:07.663 [2024-11-20 09:47:44.402141] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:14:07.663 [2024-11-20 09:47:44.402151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:07.663 [2024-11-20 09:47:44.402162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:07.663 [2024-11-20 09:47:44.402272] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:07.663 [2024-11-20 09:47:44.402295] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:07.663 [2024-11-20 09:47:44.402312] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:07.663 [2024-11-20 09:47:44.403127] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:07.663 [2024-11-20 09:47:44.404136] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:07.663 [2024-11-20 09:47:44.405154] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:07.663 [2024-11-20 09:47:44.406132] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:07.663 [2024-11-20 09:47:44.406213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:07.663 [2024-11-20 09:47:44.407148] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:07.663 [2024-11-20 09:47:44.407169] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:07.663 [2024-11-20 09:47:44.407183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:07.663 [2024-11-20 09:47:44.407209] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:07.663 [2024-11-20 09:47:44.407222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:07.663 [2024-11-20 09:47:44.407244] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:07.663 [2024-11-20 09:47:44.407270] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:07.663 [2024-11-20 09:47:44.407276] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:07.663 [2024-11-20 09:47:44.407297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:07.663 [2024-11-20 09:47:44.412322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:07.663 
[2024-11-20 09:47:44.412348] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:07.663 [2024-11-20 09:47:44.412357] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:07.663 [2024-11-20 09:47:44.412365] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:07.663 [2024-11-20 09:47:44.412374] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:07.663 [2024-11-20 09:47:44.412387] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:07.663 [2024-11-20 09:47:44.412396] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:07.663 [2024-11-20 09:47:44.412405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:07.663 [2024-11-20 09:47:44.412421] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:07.663 [2024-11-20 09:47:44.412439] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:07.663 [2024-11-20 09:47:44.420313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:07.663 [2024-11-20 09:47:44.420339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.663 [2024-11-20 09:47:44.420354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.663 [2024-11-20 09:47:44.420366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.663 [2024-11-20 09:47:44.420379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.663 [2024-11-20 09:47:44.420388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:07.663 [2024-11-20 09:47:44.420401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:07.663 [2024-11-20 09:47:44.420415] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:07.663 [2024-11-20 09:47:44.428330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:07.663 [2024-11-20 09:47:44.428358] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:07.663 [2024-11-20 09:47:44.428370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:14:07.663 [2024-11-20 09:47:44.428382] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:07.663 [2024-11-20 09:47:44.428393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:07.663 [2024-11-20 09:47:44.428407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:07.663 [2024-11-20 09:47:44.436315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:07.663 [2024-11-20 09:47:44.436392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:07.663 [2024-11-20 09:47:44.436409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:07.663 [2024-11-20 09:47:44.436423] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:07.663 [2024-11-20 09:47:44.436431] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:07.663 [2024-11-20 09:47:44.436437] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:07.663 [2024-11-20 09:47:44.436447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:07.663 [2024-11-20 09:47:44.444312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:07.663 [2024-11-20 09:47:44.444336] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:07.663 [2024-11-20 09:47:44.444361] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:07.663 [2024-11-20 09:47:44.444378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:07.663 [2024-11-20 09:47:44.444391] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:07.663 [2024-11-20 09:47:44.444400] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:07.663 [2024-11-20 09:47:44.444406] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:07.664 [2024-11-20 09:47:44.444416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:07.664 [2024-11-20 09:47:44.452313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:07.664 [2024-11-20 09:47:44.452343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:07.664 [2024-11-20 09:47:44.452360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:14:07.664 [2024-11-20 09:47:44.452374] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:07.664 [2024-11-20 09:47:44.452382] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:07.664 [2024-11-20 09:47:44.452388] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:07.664 [2024-11-20 09:47:44.452401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:07.664 [2024-11-20 09:47:44.460314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:07.664 [2024-11-20 09:47:44.460337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:07.664 [2024-11-20 09:47:44.460350] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:07.664 [2024-11-20 09:47:44.460365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:07.664 [2024-11-20 09:47:44.460377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:07.664 [2024-11-20 09:47:44.460385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:07.664 [2024-11-20 09:47:44.460394] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:07.664 [2024-11-20 09:47:44.460402] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:07.664 [2024-11-20 09:47:44.460410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:07.664 [2024-11-20 09:47:44.460418] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:07.664 [2024-11-20 09:47:44.460444] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:07.664 [2024-11-20 09:47:44.468315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:07.664 [2024-11-20 09:47:44.468342] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:07.664 [2024-11-20 09:47:44.476329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:07.664 [2024-11-20 09:47:44.476355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:07.664 [2024-11-20 09:47:44.484325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:14:07.664 [2024-11-20 09:47:44.484352] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:07.664 [2024-11-20 09:47:44.492312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:07.664 [2024-11-20 09:47:44.492346] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:07.664 [2024-11-20 09:47:44.492358] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:07.664 [2024-11-20 09:47:44.492364] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:07.664 [2024-11-20 09:47:44.492370] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:07.664 [2024-11-20 09:47:44.492376] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:07.664 [2024-11-20 09:47:44.492385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:07.664 [2024-11-20 09:47:44.492397] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:07.664 [2024-11-20 09:47:44.492409] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:07.664 [2024-11-20 09:47:44.492416] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:07.664 [2024-11-20 09:47:44.492425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:07.664 [2024-11-20 09:47:44.492436] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:07.664 [2024-11-20 09:47:44.492444] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:07.664 [2024-11-20 09:47:44.492450] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:07.664 [2024-11-20 09:47:44.492459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:07.664 [2024-11-20 09:47:44.492471] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:07.664 [2024-11-20 09:47:44.492479] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:07.664 [2024-11-20 09:47:44.492484] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:07.664 [2024-11-20 09:47:44.492493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:07.664 [2024-11-20 09:47:44.500316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:07.664 [2024-11-20 09:47:44.500345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:07.664 [2024-11-20 09:47:44.500362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:07.664 
[2024-11-20 09:47:44.500375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:07.664 ===================================================== 00:14:07.664 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:07.664 ===================================================== 00:14:07.664 Controller Capabilities/Features 00:14:07.664 ================================ 00:14:07.664 Vendor ID: 4e58 00:14:07.664 Subsystem Vendor ID: 4e58 00:14:07.664 Serial Number: SPDK2 00:14:07.664 Model Number: SPDK bdev Controller 00:14:07.664 Firmware Version: 25.01 00:14:07.664 Recommended Arb Burst: 6 00:14:07.664 IEEE OUI Identifier: 8d 6b 50 00:14:07.664 Multi-path I/O 00:14:07.664 May have multiple subsystem ports: Yes 00:14:07.664 May have multiple controllers: Yes 00:14:07.664 Associated with SR-IOV VF: No 00:14:07.664 Max Data Transfer Size: 131072 00:14:07.664 Max Number of Namespaces: 32 00:14:07.664 Max Number of I/O Queues: 127 00:14:07.664 NVMe Specification Version (VS): 1.3 00:14:07.664 NVMe Specification Version (Identify): 1.3 00:14:07.664 Maximum Queue Entries: 256 00:14:07.664 Contiguous Queues Required: Yes 00:14:07.664 Arbitration Mechanisms Supported 00:14:07.664 Weighted Round Robin: Not Supported 00:14:07.664 Vendor Specific: Not Supported 00:14:07.664 Reset Timeout: 15000 ms 00:14:07.664 Doorbell Stride: 4 bytes 00:14:07.664 NVM Subsystem Reset: Not Supported 00:14:07.664 Command Sets Supported 00:14:07.664 NVM Command Set: Supported 00:14:07.664 Boot Partition: Not Supported 00:14:07.664 Memory Page Size Minimum: 4096 bytes 00:14:07.664 Memory Page Size Maximum: 4096 bytes 00:14:07.664 Persistent Memory Region: Not Supported 00:14:07.664 Optional Asynchronous Events Supported 00:14:07.664 Namespace Attribute Notices: Supported 00:14:07.664 Firmware Activation Notices: Not Supported 00:14:07.664 ANA Change Notices: Not Supported 00:14:07.664 PLE Aggregate Log Change Notices: Not Supported 00:14:07.664 LBA Status Info Alert Notices: Not Supported 00:14:07.664 EGE Aggregate Log Change Notices: Not Supported 00:14:07.664 Normal NVM Subsystem Shutdown event: Not Supported 00:14:07.664 Zone Descriptor Change Notices: Not Supported 00:14:07.664 Discovery Log Change Notices: Not Supported 00:14:07.664 Controller Attributes 00:14:07.664 128-bit Host Identifier: Supported 00:14:07.664 Non-Operational Permissive Mode: Not Supported 00:14:07.664 NVM Sets: Not Supported 00:14:07.664 Read Recovery Levels: Not Supported 00:14:07.664 Endurance Groups: Not Supported 00:14:07.664 Predictable Latency Mode: Not Supported 00:14:07.664 Traffic Based Keep ALive: Not Supported 00:14:07.664 Namespace Granularity: Not Supported 00:14:07.664 SQ Associations: Not Supported 00:14:07.664 UUID List: Not Supported 00:14:07.664 Multi-Domain Subsystem: Not Supported 00:14:07.664 Fixed Capacity Management: Not Supported 00:14:07.664 Variable Capacity Management: Not Supported 00:14:07.664 Delete Endurance Group: Not Supported 00:14:07.664 Delete NVM Set: Not Supported 00:14:07.665 Extended LBA Formats Supported: Not Supported 00:14:07.665 Flexible Data Placement Supported: Not Supported 00:14:07.665 00:14:07.665 Controller Memory Buffer Support 00:14:07.665 ================================ 00:14:07.665 Supported: No 00:14:07.665 00:14:07.665 Persistent Memory Region Support 00:14:07.665 ================================ 00:14:07.665 Supported: No 00:14:07.665 00:14:07.665 Admin Command Set Attributes 
00:14:07.665 ============================ 00:14:07.665 Security Send/Receive: Not Supported 00:14:07.665 Format NVM: Not Supported 00:14:07.665 Firmware Activate/Download: Not Supported 00:14:07.665 Namespace Management: Not Supported 00:14:07.665 Device Self-Test: Not Supported 00:14:07.665 Directives: Not Supported 00:14:07.665 NVMe-MI: Not Supported 00:14:07.665 Virtualization Management: Not Supported 00:14:07.665 Doorbell Buffer Config: Not Supported 00:14:07.665 Get LBA Status Capability: Not Supported 00:14:07.665 Command & Feature Lockdown Capability: Not Supported 00:14:07.665 Abort Command Limit: 4 00:14:07.665 Async Event Request Limit: 4 00:14:07.665 Number of Firmware Slots: N/A 00:14:07.665 Firmware Slot 1 Read-Only: N/A 00:14:07.665 Firmware Activation Without Reset: N/A 00:14:07.665 Multiple Update Detection Support: N/A 00:14:07.665 Firmware Update Granularity: No Information Provided 00:14:07.665 Per-Namespace SMART Log: No 00:14:07.665 Asymmetric Namespace Access Log Page: Not Supported 00:14:07.665 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:07.665 Command Effects Log Page: Supported 00:14:07.665 Get Log Page Extended Data: Supported 00:14:07.665 Telemetry Log Pages: Not Supported 00:14:07.665 Persistent Event Log Pages: Not Supported 00:14:07.665 Supported Log Pages Log Page: May Support 00:14:07.665 Commands Supported & Effects Log Page: Not Supported 00:14:07.665 Feature Identifiers & Effects Log Page:May Support 00:14:07.665 NVMe-MI Commands & Effects Log Page: May Support 00:14:07.665 Data Area 4 for Telemetry Log: Not Supported 00:14:07.665 Error Log Page Entries Supported: 128 00:14:07.665 Keep Alive: Supported 00:14:07.665 Keep Alive Granularity: 10000 ms 00:14:07.665 00:14:07.665 NVM Command Set Attributes 00:14:07.665 ========================== 00:14:07.665 Submission Queue Entry Size 00:14:07.665 Max: 64 00:14:07.665 Min: 64 00:14:07.665 Completion Queue Entry Size 00:14:07.665 Max: 16 00:14:07.665 Min: 16 00:14:07.665 Number of Namespaces: 32 00:14:07.665 Compare Command: Supported 00:14:07.665 Write Uncorrectable Command: Not Supported 00:14:07.665 Dataset Management Command: Supported 00:14:07.665 Write Zeroes Command: Supported 00:14:07.665 Set Features Save Field: Not Supported 00:14:07.665 Reservations: Not Supported 00:14:07.665 Timestamp: Not Supported 00:14:07.665 Copy: Supported 00:14:07.665 Volatile Write Cache: Present 00:14:07.665 Atomic Write Unit (Normal): 1 00:14:07.665 Atomic Write Unit (PFail): 1 00:14:07.665 Atomic Compare & Write Unit: 1 00:14:07.665 Fused Compare & Write: Supported 00:14:07.665 Scatter-Gather List 00:14:07.665 SGL Command Set: Supported (Dword aligned) 00:14:07.665 SGL Keyed: Not Supported 00:14:07.665 SGL Bit Bucket Descriptor: Not Supported 00:14:07.665 SGL Metadata Pointer: Not Supported 00:14:07.665 Oversized SGL: Not Supported 00:14:07.665 SGL Metadata Address: Not Supported 00:14:07.665 SGL Offset: Not Supported 00:14:07.665 Transport SGL Data Block: Not Supported 00:14:07.665 Replay Protected Memory Block: Not Supported 00:14:07.665 00:14:07.665 Firmware Slot Information 00:14:07.665 ========================= 00:14:07.665 Active slot: 1 00:14:07.665 Slot 1 Firmware Revision: 25.01 00:14:07.665 00:14:07.665 00:14:07.665 Commands Supported and Effects 00:14:07.665 ============================== 00:14:07.665 Admin Commands 00:14:07.665 -------------- 00:14:07.665 Get Log Page (02h): Supported 00:14:07.665 Identify (06h): Supported 00:14:07.665 Abort (08h): Supported 00:14:07.665 Set Features (09h): Supported 
00:14:07.665 Get Features (0Ah): Supported 00:14:07.665 Asynchronous Event Request (0Ch): Supported 00:14:07.665 Keep Alive (18h): Supported 00:14:07.665 I/O Commands 00:14:07.665 ------------ 00:14:07.665 Flush (00h): Supported LBA-Change 00:14:07.665 Write (01h): Supported LBA-Change 00:14:07.665 Read (02h): Supported 00:14:07.665 Compare (05h): Supported 00:14:07.665 Write Zeroes (08h): Supported LBA-Change 00:14:07.665 Dataset Management (09h): Supported LBA-Change 00:14:07.665 Copy (19h): Supported LBA-Change 00:14:07.665 00:14:07.665 Error Log 00:14:07.665 ========= 00:14:07.665 00:14:07.665 Arbitration 00:14:07.665 =========== 00:14:07.665 Arbitration Burst: 1 00:14:07.665 00:14:07.665 Power Management 00:14:07.665 ================ 00:14:07.665 Number of Power States: 1 00:14:07.665 Current Power State: Power State #0 00:14:07.665 Power State #0: 00:14:07.665 Max Power: 0.00 W 00:14:07.665 Non-Operational State: Operational 00:14:07.665 Entry Latency: Not Reported 00:14:07.665 Exit Latency: Not Reported 00:14:07.665 Relative Read Throughput: 0 00:14:07.665 Relative Read Latency: 0 00:14:07.665 Relative Write Throughput: 0 00:14:07.665 Relative Write Latency: 0 00:14:07.665 Idle Power: Not Reported 00:14:07.665 Active Power: Not Reported 00:14:07.665 Non-Operational Permissive Mode: Not Supported 00:14:07.665 00:14:07.665 Health Information 00:14:07.665 ================== 00:14:07.665 Critical Warnings: 00:14:07.665 Available Spare Space: OK 00:14:07.665 Temperature: OK 00:14:07.665 Device Reliability: OK 00:14:07.665 Read Only: No 00:14:07.665 Volatile Memory Backup: OK 00:14:07.665 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:07.665 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:07.665 Available Spare: 0% 00:14:07.665 Available Sp[2024-11-20 09:47:44.500493] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:07.665 [2024-11-20 09:47:44.508313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:07.665 [2024-11-20 09:47:44.508363] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:07.665 [2024-11-20 09:47:44.508393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.665 [2024-11-20 09:47:44.508405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.665 [2024-11-20 09:47:44.508416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.665 [2024-11-20 09:47:44.508426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.665 [2024-11-20 09:47:44.508518] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:07.665 [2024-11-20 09:47:44.508541] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:07.665 [2024-11-20 09:47:44.509521] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:07.665 [2024-11-20 09:47:44.509609] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:07.665 [2024-11-20 09:47:44.509640] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:07.665 [2024-11-20 09:47:44.510533] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:07.665 [2024-11-20 09:47:44.510559] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:07.665 [2024-11-20 09:47:44.510627] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:07.665 [2024-11-20 09:47:44.513315] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:07.923 are Threshold: 0% 00:14:07.923 Life Percentage Used: 0% 00:14:07.923 Data Units Read: 0 00:14:07.923 Data Units Written: 0 00:14:07.923 Host Read Commands: 0 00:14:07.923 Host Write Commands: 0 00:14:07.923 Controller Busy Time: 0 minutes 00:14:07.923 Power Cycles: 0 00:14:07.923 Power On Hours: 0 hours 00:14:07.923 Unsafe Shutdowns: 0 00:14:07.923 Unrecoverable Media Errors: 0 00:14:07.923 Lifetime Error Log Entries: 0 00:14:07.923 Warning Temperature Time: 0 minutes 00:14:07.923 Critical Temperature Time: 0 minutes 00:14:07.923 00:14:07.923 Number of Queues 00:14:07.923 ================ 00:14:07.923 Number of I/O Submission Queues: 127 00:14:07.923 Number of I/O Completion Queues: 127 00:14:07.923 00:14:07.923 Active Namespaces 00:14:07.923 ================= 00:14:07.923 Namespace ID:1 00:14:07.923 Error Recovery Timeout: Unlimited 00:14:07.923 Command Set Identifier: NVM (00h) 00:14:07.923 Deallocate: Supported 00:14:07.923 Deallocated/Unwritten Error: Not Supported 00:14:07.923 Deallocated Read Value: Unknown 00:14:07.923 Deallocate in Write Zeroes: Not Supported 00:14:07.923 Deallocated Guard Field: 0xFFFF 00:14:07.923 Flush: Supported 00:14:07.923 Reservation: Supported 00:14:07.923 Namespace Sharing Capabilities: Multiple Controllers 00:14:07.923 Size (in LBAs): 131072 (0GiB) 00:14:07.923 Capacity (in LBAs): 131072 (0GiB) 00:14:07.923 Utilization (in LBAs): 131072 (0GiB) 00:14:07.923 NGUID: DB46D4A59FC049368E7F08FA4966356D 00:14:07.923 UUID: db46d4a5-9fc0-4936-8e7f-08fa4966356d 00:14:07.923 Thin Provisioning: Not Supported 00:14:07.923 Per-NS Atomic Units: Yes 00:14:07.923 Atomic Boundary Size (Normal): 0 00:14:07.923 Atomic Boundary Size (PFail): 0 00:14:07.923 Atomic Boundary Offset: 0 00:14:07.923 Maximum Single Source Range Length: 65535 00:14:07.923 Maximum Copy Length: 65535 00:14:07.923 Maximum Source Range Count: 1 00:14:07.923 NGUID/EUI64 Never Reused: No 00:14:07.923 Namespace Write Protected: No 00:14:07.923 Number of LBA Formats: 1 00:14:07.923 Current LBA Format: LBA Format #00 00:14:07.923 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:07.923 00:14:07.923 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:07.923 [2024-11-20 09:47:44.766087] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:13.212 Initializing NVMe Controllers 00:14:13.212 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:13.212 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:13.212 Initialization complete. Launching workers. 00:14:13.212 ======================================================== 00:14:13.212 Latency(us) 00:14:13.212 Device Information : IOPS MiB/s Average min max 00:14:13.212 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33796.85 132.02 3786.91 1182.90 8270.36 00:14:13.212 ======================================================== 00:14:13.212 Total : 33796.85 132.02 3786.91 1182.90 8270.36 00:14:13.212 00:14:13.212 [2024-11-20 09:47:49.870682] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:13.212 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:13.581 [2024-11-20 09:47:50.138475] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:18.843 Initializing NVMe Controllers 00:14:18.843 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:18.843 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:18.843 Initialization complete. Launching workers. 00:14:18.843 ======================================================== 00:14:18.843 Latency(us) 00:14:18.843 Device Information : IOPS MiB/s Average min max 00:14:18.843 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31081.84 121.41 4117.45 1197.34 9727.27 00:14:18.843 ======================================================== 00:14:18.843 Total : 31081.84 121.41 4117.45 1197.34 9727.27 00:14:18.843 00:14:18.843 [2024-11-20 09:47:55.158163] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:18.843 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:18.843 [2024-11-20 09:47:55.395090] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:24.106 [2024-11-20 09:48:00.541440] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:24.106 Initializing NVMe Controllers 00:14:24.106 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:24.106 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:24.106 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:24.106 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:24.106 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:24.106 Initialization complete. Launching workers. 
00:14:24.106 Starting thread on core 2 00:14:24.106 Starting thread on core 3 00:14:24.106 Starting thread on core 1 00:14:24.106 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:24.106 [2024-11-20 09:48:00.871831] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:27.386 [2024-11-20 09:48:04.005799] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:27.386 Initializing NVMe Controllers 00:14:27.386 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:27.386 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:27.386 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:27.386 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:27.386 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:27.386 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:27.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:27.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:27.386 Initialization complete. Launching workers. 00:14:27.386 Starting thread on core 1 with urgent priority queue 00:14:27.386 Starting thread on core 2 with urgent priority queue 00:14:27.386 Starting thread on core 3 with urgent priority queue 00:14:27.386 Starting thread on core 0 with urgent priority queue 00:14:27.386 SPDK bdev Controller (SPDK2 ) core 0: 2712.00 IO/s 36.87 secs/100000 ios 00:14:27.386 SPDK bdev Controller (SPDK2 ) core 1: 2970.67 IO/s 33.66 secs/100000 ios 00:14:27.386 SPDK bdev Controller (SPDK2 ) core 2: 3057.33 IO/s 32.71 secs/100000 ios 00:14:27.386 SPDK bdev Controller (SPDK2 ) core 3: 3123.00 IO/s 32.02 secs/100000 ios 00:14:27.386 ======================================================== 00:14:27.386 00:14:27.386 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:27.643 [2024-11-20 09:48:04.331807] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:27.643 Initializing NVMe Controllers 00:14:27.644 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:27.644 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:27.644 Namespace ID: 1 size: 0GB 00:14:27.644 Initialization complete. 00:14:27.644 INFO: using host memory buffer for IO 00:14:27.644 Hello world! 
00:14:27.644 [2024-11-20 09:48:04.342025] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:27.644 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:27.901 [2024-11-20 09:48:04.666114] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:29.274 Initializing NVMe Controllers 00:14:29.274 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:29.274 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:29.274 Initialization complete. Launching workers. 00:14:29.274 submit (in ns) avg, min, max = 8669.9, 3548.9, 4017814.4 00:14:29.274 complete (in ns) avg, min, max = 25163.4, 2064.4, 4018447.8 00:14:29.274 00:14:29.274 Submit histogram 00:14:29.274 ================ 00:14:29.274 Range in us Cumulative Count 00:14:29.274 3.532 - 3.556: 0.0464% ( 6) 00:14:29.274 3.556 - 3.579: 1.3469% ( 168) 00:14:29.274 3.579 - 3.603: 4.7759% ( 443) 00:14:29.274 3.603 - 3.627: 12.9267% ( 1053) 00:14:29.274 3.627 - 3.650: 23.5467% ( 1372) 00:14:29.274 3.650 - 3.674: 35.5445% ( 1550) 00:14:29.274 3.674 - 3.698: 43.8501% ( 1073) 00:14:29.274 3.698 - 3.721: 49.8026% ( 769) 00:14:29.274 3.721 - 3.745: 53.7658% ( 512) 00:14:29.274 3.745 - 3.769: 58.1160% ( 562) 00:14:29.274 3.769 - 3.793: 61.5528% ( 444) 00:14:29.274 3.793 - 3.816: 65.0747% ( 455) 00:14:29.274 3.816 - 3.840: 68.0548% ( 385) 00:14:29.274 3.840 - 3.864: 72.0102% ( 511) 00:14:29.274 3.864 - 3.887: 76.9951% ( 644) 00:14:29.274 3.887 - 3.911: 81.4227% ( 572) 00:14:29.274 3.911 - 3.935: 85.0453% ( 468) 00:14:29.274 3.935 - 3.959: 86.7482% ( 220) 00:14:29.274 3.959 - 3.982: 88.2731% ( 197) 00:14:29.274 3.982 - 4.006: 89.6199% ( 174) 00:14:29.274 4.006 - 4.030: 90.9978% ( 178) 00:14:29.274 4.030 - 4.053: 92.1047% ( 143) 00:14:29.274 4.053 - 4.077: 93.0954% ( 128) 00:14:29.274 4.077 - 4.101: 94.0166% ( 119) 00:14:29.274 4.101 - 4.124: 94.8061% ( 102) 00:14:29.274 4.124 - 4.148: 95.4873% ( 88) 00:14:29.274 4.148 - 4.172: 95.7737% ( 37) 00:14:29.274 4.172 - 4.196: 96.0446% ( 35) 00:14:29.274 4.196 - 4.219: 96.2691% ( 29) 00:14:29.274 4.219 - 4.243: 96.4626% ( 25) 00:14:29.274 4.243 - 4.267: 96.5942% ( 17) 00:14:29.274 4.267 - 4.290: 96.6871% ( 12) 00:14:29.274 4.290 - 4.314: 96.8032% ( 15) 00:14:29.274 4.314 - 4.338: 96.9038% ( 13) 00:14:29.274 4.338 - 4.361: 97.0122% ( 14) 00:14:29.274 4.361 - 4.385: 97.0663% ( 7) 00:14:29.274 4.409 - 4.433: 97.0973% ( 4) 00:14:29.274 4.433 - 4.456: 97.1128% ( 2) 00:14:29.274 4.456 - 4.480: 97.1283% ( 2) 00:14:29.274 4.480 - 4.504: 97.1592% ( 4) 00:14:29.274 4.504 - 4.527: 97.1670% ( 1) 00:14:29.274 4.527 - 4.551: 97.1747% ( 1) 00:14:29.274 4.551 - 4.575: 97.1824% ( 1) 00:14:29.274 4.670 - 4.693: 97.1979% ( 2) 00:14:29.274 4.693 - 4.717: 97.2057% ( 1) 00:14:29.274 4.717 - 4.741: 97.2134% ( 1) 00:14:29.274 4.741 - 4.764: 97.2753% ( 8) 00:14:29.274 4.764 - 4.788: 97.2831% ( 1) 00:14:29.274 4.788 - 4.812: 97.3527% ( 9) 00:14:29.274 4.812 - 4.836: 97.3992% ( 6) 00:14:29.274 4.836 - 4.859: 97.4456% ( 6) 00:14:29.274 4.859 - 4.883: 97.5308% ( 11) 00:14:29.274 4.883 - 4.907: 97.5772% ( 6) 00:14:29.274 4.907 - 4.930: 97.6546% ( 10) 00:14:29.274 4.930 - 4.954: 97.7088% ( 7) 00:14:29.274 4.954 - 4.978: 97.7552% ( 6) 00:14:29.274 5.001 - 5.025: 97.8017% ( 6) 00:14:29.274 5.025 
- 5.049: 97.8326% ( 4) 00:14:29.274 5.049 - 5.073: 97.8714% ( 5) 00:14:29.274 5.073 - 5.096: 97.8946% ( 3) 00:14:29.274 5.096 - 5.120: 97.9255% ( 4) 00:14:29.274 5.120 - 5.144: 97.9565% ( 4) 00:14:29.274 5.144 - 5.167: 97.9952% ( 5) 00:14:29.274 5.191 - 5.215: 98.0029% ( 1) 00:14:29.274 5.215 - 5.239: 98.0262% ( 3) 00:14:29.274 5.239 - 5.262: 98.0339% ( 1) 00:14:29.274 5.286 - 5.310: 98.0416% ( 1) 00:14:29.274 5.381 - 5.404: 98.0494% ( 1) 00:14:29.274 5.404 - 5.428: 98.0571% ( 1) 00:14:29.274 5.428 - 5.452: 98.0649% ( 1) 00:14:29.274 5.594 - 5.618: 98.0726% ( 1) 00:14:29.274 5.618 - 5.641: 98.0803% ( 1) 00:14:29.274 5.641 - 5.665: 98.0881% ( 1) 00:14:29.274 5.831 - 5.855: 98.0958% ( 1) 00:14:29.274 6.044 - 6.068: 98.1036% ( 1) 00:14:29.274 6.116 - 6.163: 98.1113% ( 1) 00:14:29.274 6.258 - 6.305: 98.1190% ( 1) 00:14:29.274 6.542 - 6.590: 98.1345% ( 2) 00:14:29.274 6.874 - 6.921: 98.1423% ( 1) 00:14:29.274 7.064 - 7.111: 98.1500% ( 1) 00:14:29.274 7.206 - 7.253: 98.1578% ( 1) 00:14:29.274 7.443 - 7.490: 98.1655% ( 1) 00:14:29.274 7.633 - 7.680: 98.1732% ( 1) 00:14:29.274 7.727 - 7.775: 98.1810% ( 1) 00:14:29.274 7.964 - 8.012: 98.1887% ( 1) 00:14:29.274 8.012 - 8.059: 98.1965% ( 1) 00:14:29.274 8.059 - 8.107: 98.2042% ( 1) 00:14:29.274 8.296 - 8.344: 98.2119% ( 1) 00:14:29.274 8.344 - 8.391: 98.2197% ( 1) 00:14:29.274 8.533 - 8.581: 98.2352% ( 2) 00:14:29.274 8.723 - 8.770: 98.2429% ( 1) 00:14:29.274 8.818 - 8.865: 98.2661% ( 3) 00:14:29.274 8.960 - 9.007: 98.2816% ( 2) 00:14:29.274 9.007 - 9.055: 98.2893% ( 1) 00:14:29.274 9.102 - 9.150: 98.3126% ( 3) 00:14:29.274 9.150 - 9.197: 98.3203% ( 1) 00:14:29.274 9.197 - 9.244: 98.3280% ( 1) 00:14:29.274 9.244 - 9.292: 98.3358% ( 1) 00:14:29.274 9.292 - 9.339: 98.3435% ( 1) 00:14:29.274 9.387 - 9.434: 98.3513% ( 1) 00:14:29.274 9.434 - 9.481: 98.3590% ( 1) 00:14:29.274 9.481 - 9.529: 98.3667% ( 1) 00:14:29.274 9.576 - 9.624: 98.3745% ( 1) 00:14:29.274 9.671 - 9.719: 98.3822% ( 1) 00:14:29.274 9.813 - 9.861: 98.3900% ( 1) 00:14:29.274 9.861 - 9.908: 98.4054% ( 2) 00:14:29.274 9.908 - 9.956: 98.4132% ( 1) 00:14:29.274 9.956 - 10.003: 98.4209% ( 1) 00:14:29.274 10.003 - 10.050: 98.4364% ( 2) 00:14:29.274 10.098 - 10.145: 98.4442% ( 1) 00:14:29.274 10.145 - 10.193: 98.4596% ( 2) 00:14:29.274 10.287 - 10.335: 98.4751% ( 2) 00:14:29.274 10.335 - 10.382: 98.4829% ( 1) 00:14:29.274 10.430 - 10.477: 98.4906% ( 1) 00:14:29.274 10.477 - 10.524: 98.5061% ( 2) 00:14:29.274 10.572 - 10.619: 98.5138% ( 1) 00:14:29.274 10.714 - 10.761: 98.5216% ( 1) 00:14:29.274 10.809 - 10.856: 98.5293% ( 1) 00:14:29.274 11.330 - 11.378: 98.5370% ( 1) 00:14:29.274 11.473 - 11.520: 98.5448% ( 1) 00:14:29.274 11.567 - 11.615: 98.5525% ( 1) 00:14:29.274 11.710 - 11.757: 98.5603% ( 1) 00:14:29.274 11.757 - 11.804: 98.5680% ( 1) 00:14:29.274 11.804 - 11.852: 98.5835% ( 2) 00:14:29.274 12.041 - 12.089: 98.5912% ( 1) 00:14:29.274 12.136 - 12.231: 98.5990% ( 1) 00:14:29.274 12.231 - 12.326: 98.6144% ( 2) 00:14:29.274 12.705 - 12.800: 98.6222% ( 1) 00:14:29.274 12.800 - 12.895: 98.6299% ( 1) 00:14:29.274 12.895 - 12.990: 98.6377% ( 1) 00:14:29.274 12.990 - 13.084: 98.6454% ( 1) 00:14:29.274 13.179 - 13.274: 98.6531% ( 1) 00:14:29.274 13.369 - 13.464: 98.6609% ( 1) 00:14:29.274 13.653 - 13.748: 98.6686% ( 1) 00:14:29.274 13.748 - 13.843: 98.6764% ( 1) 00:14:29.274 13.938 - 14.033: 98.6841% ( 1) 00:14:29.274 14.127 - 14.222: 98.6918% ( 1) 00:14:29.274 14.317 - 14.412: 98.7073% ( 2) 00:14:29.274 14.696 - 14.791: 98.7151% ( 1) 00:14:29.274 15.076 - 15.170: 98.7228% ( 1) 00:14:29.274 15.265 - 
15.360: 98.7306% ( 1) 00:14:29.274 16.972 - 17.067: 98.7460% ( 2) 00:14:29.274 17.161 - 17.256: 98.7538% ( 1) 00:14:29.274 17.256 - 17.351: 98.7693% ( 2) 00:14:29.274 17.351 - 17.446: 98.8002% ( 4) 00:14:29.274 17.446 - 17.541: 98.8312% ( 4) 00:14:29.274 17.541 - 17.636: 98.8621% ( 4) 00:14:29.275 17.636 - 17.730: 98.9318% ( 9) 00:14:29.275 17.730 - 17.825: 98.9937% ( 8) 00:14:29.275 17.825 - 17.920: 99.0557% ( 8) 00:14:29.275 17.920 - 18.015: 99.1098% ( 7) 00:14:29.275 18.015 - 18.110: 99.1950% ( 11) 00:14:29.275 18.110 - 18.204: 99.2569% ( 8) 00:14:29.275 18.204 - 18.299: 99.3111% ( 7) 00:14:29.275 18.299 - 18.394: 99.3653% ( 7) 00:14:29.275 18.394 - 18.489: 99.4736% ( 14) 00:14:29.275 18.489 - 18.584: 99.5278% ( 7) 00:14:29.275 18.584 - 18.679: 99.5510% ( 3) 00:14:29.275 18.679 - 18.773: 99.5820% ( 4) 00:14:29.275 18.773 - 18.868: 99.6207% ( 5) 00:14:29.275 18.868 - 18.963: 99.6439% ( 3) 00:14:29.275 18.963 - 19.058: 99.6594% ( 2) 00:14:29.275 19.058 - 19.153: 99.6749% ( 2) 00:14:29.275 19.153 - 19.247: 99.7059% ( 4) 00:14:29.275 19.247 - 19.342: 99.7136% ( 1) 00:14:29.275 19.721 - 19.816: 99.7213% ( 1) 00:14:29.275 20.480 - 20.575: 99.7291% ( 1) 00:14:29.275 20.670 - 20.764: 99.7368% ( 1) 00:14:29.275 22.281 - 22.376: 99.7446% ( 1) 00:14:29.275 22.945 - 23.040: 99.7523% ( 1) 00:14:29.275 23.419 - 23.514: 99.7600% ( 1) 00:14:29.275 23.609 - 23.704: 99.7678% ( 1) 00:14:29.275 23.704 - 23.799: 99.7755% ( 1) 00:14:29.275 23.799 - 23.893: 99.7833% ( 1) 00:14:29.275 24.083 - 24.178: 99.7910% ( 1) 00:14:29.275 24.462 - 24.652: 99.8065% ( 2) 00:14:29.275 24.652 - 24.841: 99.8142% ( 1) 00:14:29.275 24.841 - 25.031: 99.8220% ( 1) 00:14:29.275 25.031 - 25.221: 99.8297% ( 1) 00:14:29.275 25.410 - 25.600: 99.8374% ( 1) 00:14:29.275 25.600 - 25.790: 99.8452% ( 1) 00:14:29.275 26.359 - 26.548: 99.8529% ( 1) 00:14:29.275 27.117 - 27.307: 99.8607% ( 1) 00:14:29.275 29.203 - 29.393: 99.8684% ( 1) 00:14:29.275 31.099 - 31.289: 99.8762% ( 1) 00:14:29.275 344.367 - 345.884: 99.8839% ( 1) 00:14:29.275 3980.705 - 4004.978: 99.9536% ( 9) 00:14:29.275 4004.978 - 4029.250: 100.0000% ( 6) 00:14:29.275 00:14:29.275 Complete histogram 00:14:29.275 ================== 00:14:29.275 Range in us Cumulative Count 00:14:29.275 2.062 - 2.074: 8.2669% ( 1068) 00:14:29.275 2.074 - 2.086: 44.0746% ( 4626) 00:14:29.275 2.086 - 2.098: 46.8303% ( 356) 00:14:29.275 2.098 - 2.110: 52.7518% ( 765) 00:14:29.275 2.110 - 2.121: 59.5015% ( 872) 00:14:29.275 2.121 - 2.133: 60.7477% ( 161) 00:14:29.275 2.133 - 2.145: 68.7205% ( 1030) 00:14:29.275 2.145 - 2.157: 76.5075% ( 1006) 00:14:29.275 2.157 - 2.169: 77.2428% ( 95) 00:14:29.275 2.169 - 2.181: 79.6811% ( 315) 00:14:29.275 2.181 - 2.193: 81.5466% ( 241) 00:14:29.275 2.193 - 2.204: 82.1039% ( 72) 00:14:29.275 2.204 - 2.216: 85.1382% ( 392) 00:14:29.275 2.216 - 2.228: 89.1013% ( 512) 00:14:29.275 2.228 - 2.240: 91.0597% ( 253) 00:14:29.275 2.240 - 2.252: 92.5226% ( 189) 00:14:29.275 2.252 - 2.264: 93.2580% ( 95) 00:14:29.275 2.264 - 2.276: 93.5754% ( 41) 00:14:29.275 2.276 - 2.287: 93.8850% ( 40) 00:14:29.275 2.287 - 2.299: 94.4268% ( 70) 00:14:29.275 2.299 - 2.311: 94.9919% ( 73) 00:14:29.275 2.311 - 2.323: 95.3789% ( 50) 00:14:29.275 2.323 - 2.335: 95.4331% ( 7) 00:14:29.275 2.335 - 2.347: 95.4718% ( 5) 00:14:29.275 2.347 - 2.359: 95.5569% ( 11) 00:14:29.275 2.359 - 2.370: 95.6421% ( 11) 00:14:29.275 2.370 - 2.382: 95.8279% ( 24) 00:14:29.275 2.382 - 2.394: 96.3697% ( 70) 00:14:29.275 2.394 - 2.406: 96.6329% ( 34) 00:14:29.275 2.406 - 2.418: 96.8806% ( 32) 00:14:29.275 2.418 - 2.430: 
97.0199% ( 18) 00:14:29.275 2.430 - 2.441: 97.2211% ( 26) 00:14:29.275 2.441 - 2.453: 97.4147% ( 25) 00:14:29.275 2.453 - 2.465: 97.6237% ( 27) 00:14:29.275 2.465 - 2.477: 97.7475% ( 16) 00:14:29.275 2.477 - 2.489: 97.8636% ( 15) 00:14:29.275 2.489 - 2.501: 98.0029% ( 18) 00:14:29.275 2.501 - 2.513: 98.0803% ( 10) 00:14:29.275 2.513 - 2.524: 98.1578% ( 10) 00:14:29.275 2.524 - 2.536: 98.2274% ( 9) 00:14:29.275 2.536 - 2.548: 98.2584% ( 4) 00:14:29.275 2.548 - 2.560: 98.2816% ( 3) 00:14:29.275 2.560 - 2.572: 98.3203% ( 5) 00:14:29.275 2.572 - 2.584: 98.3358% ( 2) 00:14:29.275 2.584 - 2.596: 98.3513% ( 2) 00:14:29.275 2.596 - 2.607: 98.3667% ( 2) 00:14:29.275 2.619 - 2.631: 98.3745% ( 1) 00:14:29.275 2.643 - 2.655: 98.3822% ( 1) 00:14:29.275 2.655 - 2.667: 98.3977% ( 2) 00:14:29.275 2.667 - 2.679: 98.4132% ( 2) 00:14:29.275 2.679 - 2.690: 98.4209% ( 1) 00:14:29.275 2.750 - 2.761: 98.4287% ( 1) 00:14:29.275 2.773 - 2.785: 98.4364% ( 1) 00:14:29.275 2.785 - 2.797: 98.4596% ( 3) 00:14:29.275 2.844 - 2.856: 98.4751% ( 2) 00:14:29.275 2.916 - 2.927: 98.4829% ( 1) 00:14:29.275 2.975 - 2.987: 98.4906% ( 1) 00:14:29.275 3.058 - 3.081: 98.4983% ( 1) 00:14:29.275 3.247 - 3.271: 98.5061% ( 1) 00:14:29.275 3.319 - 3.342: 98.5138% ( 1) 00:14:29.275 3.579 - 3.603: 98.5216% ( 1) 00:14:29.275 3.627 - 3.650: 98.5293% ( 1) 00:14:29.275 3.650 - 3.674: 98.5448% ( 2) 00:14:29.275 3.698 - 3.721: 9[2024-11-20 09:48:05.761177] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:29.275 8.5757% ( 4) 00:14:29.275 3.721 - 3.745: 98.5835% ( 1) 00:14:29.275 3.745 - 3.769: 98.5990% ( 2) 00:14:29.275 3.793 - 3.816: 98.6144% ( 2) 00:14:29.275 3.816 - 3.840: 98.6222% ( 1) 00:14:29.275 3.840 - 3.864: 98.6299% ( 1) 00:14:29.275 3.935 - 3.959: 98.6377% ( 1) 00:14:29.275 3.959 - 3.982: 98.6686% ( 4) 00:14:29.275 3.982 - 4.006: 98.6764% ( 1) 00:14:29.275 4.030 - 4.053: 98.6841% ( 1) 00:14:29.275 4.053 - 4.077: 98.6918% ( 1) 00:14:29.275 4.124 - 4.148: 98.6996% ( 1) 00:14:29.275 4.622 - 4.646: 98.7073% ( 1) 00:14:29.275 4.646 - 4.670: 98.7151% ( 1) 00:14:29.275 6.400 - 6.447: 98.7228% ( 1) 00:14:29.275 6.684 - 6.732: 98.7306% ( 1) 00:14:29.275 6.732 - 6.779: 98.7383% ( 1) 00:14:29.275 6.874 - 6.921: 98.7460% ( 1) 00:14:29.275 7.016 - 7.064: 98.7538% ( 1) 00:14:29.275 7.206 - 7.253: 98.7693% ( 2) 00:14:29.275 7.396 - 7.443: 98.7770% ( 1) 00:14:29.275 7.443 - 7.490: 98.7925% ( 2) 00:14:29.275 7.585 - 7.633: 98.8002% ( 1) 00:14:29.275 7.680 - 7.727: 98.8080% ( 1) 00:14:29.275 7.775 - 7.822: 98.8157% ( 1) 00:14:29.275 7.917 - 7.964: 98.8234% ( 1) 00:14:29.275 8.012 - 8.059: 98.8312% ( 1) 00:14:29.275 8.059 - 8.107: 98.8467% ( 2) 00:14:29.275 8.154 - 8.201: 98.8544% ( 1) 00:14:29.275 8.201 - 8.249: 98.8699% ( 2) 00:14:29.275 8.439 - 8.486: 98.8776% ( 1) 00:14:29.275 8.486 - 8.533: 98.8854% ( 1) 00:14:29.275 8.533 - 8.581: 98.8931% ( 1) 00:14:29.275 9.007 - 9.055: 98.9008% ( 1) 00:14:29.275 9.102 - 9.150: 98.9086% ( 1) 00:14:29.275 9.292 - 9.339: 98.9163% ( 1) 00:14:29.275 9.339 - 9.387: 98.9241% ( 1) 00:14:29.275 12.990 - 13.084: 98.9318% ( 1) 00:14:29.275 15.455 - 15.550: 98.9395% ( 1) 00:14:29.275 15.550 - 15.644: 98.9473% ( 1) 00:14:29.275 15.644 - 15.739: 98.9628% ( 2) 00:14:29.275 15.834 - 15.929: 98.9705% ( 1) 00:14:29.275 15.929 - 16.024: 98.9937% ( 3) 00:14:29.275 16.024 - 16.119: 99.0092% ( 2) 00:14:29.275 16.119 - 16.213: 99.0170% ( 1) 00:14:29.275 16.213 - 16.308: 99.0402% ( 3) 00:14:29.275 16.308 - 16.403: 99.0479% ( 1) 00:14:29.275 16.403 - 16.498: 99.0944% ( 6) 
00:14:29.275 16.498 - 16.593: 99.1253% ( 4) 00:14:29.275 16.593 - 16.687: 99.1485% ( 3) 00:14:29.275 16.687 - 16.782: 99.1950% ( 6) 00:14:29.275 16.782 - 16.877: 99.2414% ( 6) 00:14:29.275 16.877 - 16.972: 99.2801% ( 5) 00:14:29.275 16.972 - 17.067: 99.2879% ( 1) 00:14:29.275 17.067 - 17.161: 99.3188% ( 4) 00:14:29.275 17.161 - 17.256: 99.3343% ( 2) 00:14:29.275 17.256 - 17.351: 99.3421% ( 1) 00:14:29.275 17.351 - 17.446: 99.3575% ( 2) 00:14:29.275 17.541 - 17.636: 99.3653% ( 1) 00:14:29.275 18.204 - 18.299: 99.3885% ( 3) 00:14:29.275 18.299 - 18.394: 99.3962% ( 1) 00:14:29.275 18.679 - 18.773: 99.4040% ( 1) 00:14:29.275 22.281 - 22.376: 99.4117% ( 1) 00:14:29.275 23.419 - 23.514: 99.4195% ( 1) 00:14:29.275 27.307 - 27.496: 99.4272% ( 1) 00:14:29.275 3980.705 - 4004.978: 99.7523% ( 42) 00:14:29.275 4004.978 - 4029.250: 100.0000% ( 32) 00:14:29.275 00:14:29.275 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:29.275 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:29.275 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:29.275 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:29.276 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:29.276 [ 00:14:29.276 { 00:14:29.276 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:29.276 "subtype": "Discovery", 00:14:29.276 "listen_addresses": [], 00:14:29.276 "allow_any_host": true, 00:14:29.276 "hosts": [] 00:14:29.276 }, 00:14:29.276 { 00:14:29.276 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:29.276 "subtype": "NVMe", 00:14:29.276 "listen_addresses": [ 00:14:29.276 { 00:14:29.276 "trtype": "VFIOUSER", 00:14:29.276 "adrfam": "IPv4", 00:14:29.276 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:29.276 "trsvcid": "0" 00:14:29.276 } 00:14:29.276 ], 00:14:29.276 "allow_any_host": true, 00:14:29.276 "hosts": [], 00:14:29.276 "serial_number": "SPDK1", 00:14:29.276 "model_number": "SPDK bdev Controller", 00:14:29.276 "max_namespaces": 32, 00:14:29.276 "min_cntlid": 1, 00:14:29.276 "max_cntlid": 65519, 00:14:29.276 "namespaces": [ 00:14:29.276 { 00:14:29.276 "nsid": 1, 00:14:29.276 "bdev_name": "Malloc1", 00:14:29.276 "name": "Malloc1", 00:14:29.276 "nguid": "13E6D9DF0C884BF59B07F400C1627759", 00:14:29.276 "uuid": "13e6d9df-0c88-4bf5-9b07-f400c1627759" 00:14:29.276 }, 00:14:29.276 { 00:14:29.276 "nsid": 2, 00:14:29.276 "bdev_name": "Malloc3", 00:14:29.276 "name": "Malloc3", 00:14:29.276 "nguid": "6D19AE7D7C0041DCB98F6148D5D7D5AA", 00:14:29.276 "uuid": "6d19ae7d-7c00-41dc-b98f-6148d5d7d5aa" 00:14:29.276 } 00:14:29.276 ] 00:14:29.276 }, 00:14:29.276 { 00:14:29.276 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:29.276 "subtype": "NVMe", 00:14:29.276 "listen_addresses": [ 00:14:29.276 { 00:14:29.276 "trtype": "VFIOUSER", 00:14:29.276 "adrfam": "IPv4", 00:14:29.276 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:29.276 "trsvcid": "0" 00:14:29.276 } 00:14:29.276 ], 00:14:29.276 "allow_any_host": true, 00:14:29.276 "hosts": [], 00:14:29.276 "serial_number": "SPDK2", 00:14:29.276 "model_number": "SPDK bdev Controller", 00:14:29.276 "max_namespaces": 32, 
00:14:29.276 "min_cntlid": 1, 00:14:29.276 "max_cntlid": 65519, 00:14:29.276 "namespaces": [ 00:14:29.276 { 00:14:29.276 "nsid": 1, 00:14:29.276 "bdev_name": "Malloc2", 00:14:29.276 "name": "Malloc2", 00:14:29.276 "nguid": "DB46D4A59FC049368E7F08FA4966356D", 00:14:29.276 "uuid": "db46d4a5-9fc0-4936-8e7f-08fa4966356d" 00:14:29.276 } 00:14:29.276 ] 00:14:29.276 } 00:14:29.276 ] 00:14:29.276 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:29.276 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3716195 00:14:29.276 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:29.276 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:29.276 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:29.276 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:29.276 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:29.276 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:29.276 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:29.276 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:29.534 [2024-11-20 09:48:06.275836] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:29.534 Malloc4 00:14:29.534 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:29.792 [2024-11-20 09:48:06.693962] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:30.050 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:30.050 Asynchronous Event Request test 00:14:30.050 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:30.050 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:30.050 Registering asynchronous event callbacks... 00:14:30.050 Starting namespace attribute notice tests for all controllers... 00:14:30.050 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:30.050 aer_cb - Changed Namespace 00:14:30.050 Cleaning up... 
00:14:30.307 [ 00:14:30.307 { 00:14:30.307 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:30.307 "subtype": "Discovery", 00:14:30.307 "listen_addresses": [], 00:14:30.307 "allow_any_host": true, 00:14:30.307 "hosts": [] 00:14:30.307 }, 00:14:30.307 { 00:14:30.307 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:30.307 "subtype": "NVMe", 00:14:30.307 "listen_addresses": [ 00:14:30.307 { 00:14:30.307 "trtype": "VFIOUSER", 00:14:30.307 "adrfam": "IPv4", 00:14:30.307 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:30.307 "trsvcid": "0" 00:14:30.307 } 00:14:30.307 ], 00:14:30.307 "allow_any_host": true, 00:14:30.307 "hosts": [], 00:14:30.307 "serial_number": "SPDK1", 00:14:30.307 "model_number": "SPDK bdev Controller", 00:14:30.307 "max_namespaces": 32, 00:14:30.307 "min_cntlid": 1, 00:14:30.307 "max_cntlid": 65519, 00:14:30.307 "namespaces": [ 00:14:30.307 { 00:14:30.307 "nsid": 1, 00:14:30.307 "bdev_name": "Malloc1", 00:14:30.307 "name": "Malloc1", 00:14:30.307 "nguid": "13E6D9DF0C884BF59B07F400C1627759", 00:14:30.307 "uuid": "13e6d9df-0c88-4bf5-9b07-f400c1627759" 00:14:30.307 }, 00:14:30.307 { 00:14:30.307 "nsid": 2, 00:14:30.307 "bdev_name": "Malloc3", 00:14:30.307 "name": "Malloc3", 00:14:30.307 "nguid": "6D19AE7D7C0041DCB98F6148D5D7D5AA", 00:14:30.307 "uuid": "6d19ae7d-7c00-41dc-b98f-6148d5d7d5aa" 00:14:30.307 } 00:14:30.307 ] 00:14:30.307 }, 00:14:30.307 { 00:14:30.307 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:30.307 "subtype": "NVMe", 00:14:30.307 "listen_addresses": [ 00:14:30.307 { 00:14:30.307 "trtype": "VFIOUSER", 00:14:30.307 "adrfam": "IPv4", 00:14:30.307 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:30.307 "trsvcid": "0" 00:14:30.307 } 00:14:30.307 ], 00:14:30.307 "allow_any_host": true, 00:14:30.307 "hosts": [], 00:14:30.307 "serial_number": "SPDK2", 00:14:30.308 "model_number": "SPDK bdev Controller", 00:14:30.308 "max_namespaces": 32, 00:14:30.308 "min_cntlid": 1, 00:14:30.308 "max_cntlid": 65519, 00:14:30.308 "namespaces": [ 00:14:30.308 { 00:14:30.308 "nsid": 1, 00:14:30.308 "bdev_name": "Malloc2", 00:14:30.308 "name": "Malloc2", 00:14:30.308 "nguid": "DB46D4A59FC049368E7F08FA4966356D", 00:14:30.308 "uuid": "db46d4a5-9fc0-4936-8e7f-08fa4966356d" 00:14:30.308 }, 00:14:30.308 { 00:14:30.308 "nsid": 2, 00:14:30.308 "bdev_name": "Malloc4", 00:14:30.308 "name": "Malloc4", 00:14:30.308 "nguid": "E20EB84058F54465B37094248019C0D4", 00:14:30.308 "uuid": "e20eb840-58f5-4465-b370-94248019c0d4" 00:14:30.308 } 00:14:30.308 ] 00:14:30.308 } 00:14:30.308 ] 00:14:30.308 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3716195 00:14:30.308 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:30.308 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3710347 00:14:30.308 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3710347 ']' 00:14:30.308 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3710347 00:14:30.308 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:30.308 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:30.308 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3710347 00:14:30.308 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:30.308 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:30.308 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3710347' 00:14:30.308 killing process with pid 3710347 00:14:30.308 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3710347 00:14:30.308 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3710347 00:14:30.566 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:30.566 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:30.566 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:30.566 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:30.566 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:30.566 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3716340 00:14:30.566 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:30.566 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3716340' 00:14:30.566 Process pid: 3716340 00:14:30.566 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:30.566 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3716340 00:14:30.566 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3716340 ']' 00:14:30.566 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.566 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:30.566 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.566 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:30.566 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:30.566 [2024-11-20 09:48:07.401595] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:30.566 [2024-11-20 09:48:07.402706] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:14:30.566 [2024-11-20 09:48:07.402769] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.566 [2024-11-20 09:48:07.470500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:30.825 [2024-11-20 09:48:07.532642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.825 [2024-11-20 09:48:07.532687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.825 [2024-11-20 09:48:07.532717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.825 [2024-11-20 09:48:07.532728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.825 [2024-11-20 09:48:07.532739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.825 [2024-11-20 09:48:07.534423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.825 [2024-11-20 09:48:07.534459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.825 [2024-11-20 09:48:07.534483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.825 [2024-11-20 09:48:07.534486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.825 [2024-11-20 09:48:07.632029] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:30.825 [2024-11-20 09:48:07.632547] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:14:30.825 [2024-11-20 09:48:07.632774] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:30.825 [2024-11-20 09:48:07.633182] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:30.825 [2024-11-20 09:48:07.633470] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
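The interrupt-mode pass that follows rebuilds the same two vfio-user controllers through scripts/rpc.py. As a minimal sketch of what the trace below executes for the first device (rpc.py stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path; the same steps repeat with Malloc2, cnode2 and vfio-user2):

  # sketch only -- commands and arguments are taken from the trace below
  rpc.py nvmf_create_transport -t VFIOUSER -M -I        # transport_args '-M -I' passed by setup_nvmf_vfio_user
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1       # socket directory for the first controller
  rpc.py bdev_malloc_create 64 512 -b Malloc1           # 64 MB malloc bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0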
00:14:30.825 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.825 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:30.825 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:31.760 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:32.326 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:32.326 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:32.326 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:32.326 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:32.326 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:32.584 Malloc1 00:14:32.584 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:32.842 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:33.099 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:33.357 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:33.357 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:33.357 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:33.615 Malloc2 00:14:33.615 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:33.872 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:34.130 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:34.389 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:34.389 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3716340 00:14:34.389 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 3716340 ']' 00:14:34.389 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3716340 00:14:34.389 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:34.389 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.389 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3716340 00:14:34.389 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.389 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.389 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3716340' 00:14:34.389 killing process with pid 3716340 00:14:34.389 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3716340 00:14:34.389 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3716340 00:14:34.647 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:34.647 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:34.647 00:14:34.647 real 0m54.321s 00:14:34.647 user 3m30.194s 00:14:34.647 sys 0m4.026s 00:14:34.647 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.647 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:34.647 ************************************ 00:14:34.647 END TEST nvmf_vfio_user 00:14:34.647 ************************************ 00:14:34.647 09:48:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:34.647 09:48:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:34.647 09:48:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.647 09:48:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:34.907 ************************************ 00:14:34.907 START TEST nvmf_vfio_user_nvme_compliance 00:14:34.907 ************************************ 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:34.907 * Looking for test storage... 
00:14:34.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:34.907 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:34.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.908 --rc genhtml_branch_coverage=1 00:14:34.908 --rc genhtml_function_coverage=1 00:14:34.908 --rc genhtml_legend=1 00:14:34.908 --rc geninfo_all_blocks=1 00:14:34.908 --rc geninfo_unexecuted_blocks=1 00:14:34.908 00:14:34.908 ' 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:34.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.908 --rc genhtml_branch_coverage=1 00:14:34.908 --rc genhtml_function_coverage=1 00:14:34.908 --rc genhtml_legend=1 00:14:34.908 --rc geninfo_all_blocks=1 00:14:34.908 --rc geninfo_unexecuted_blocks=1 00:14:34.908 00:14:34.908 ' 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:34.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.908 --rc genhtml_branch_coverage=1 00:14:34.908 --rc genhtml_function_coverage=1 00:14:34.908 --rc genhtml_legend=1 00:14:34.908 --rc geninfo_all_blocks=1 00:14:34.908 --rc geninfo_unexecuted_blocks=1 00:14:34.908 00:14:34.908 ' 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:34.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.908 --rc genhtml_branch_coverage=1 00:14:34.908 --rc genhtml_function_coverage=1 00:14:34.908 --rc genhtml_legend=1 00:14:34.908 --rc geninfo_all_blocks=1 00:14:34.908 --rc 
geninfo_unexecuted_blocks=1 00:14:34.908 00:14:34.908 ' 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:34.908 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:34.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3717454 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3717454' 00:14:34.909 Process pid: 3717454 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3717454 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3717454 ']' 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.909 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:34.909 [2024-11-20 09:48:11.772946] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:14:34.909 [2024-11-20 09:48:11.773035] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.168 [2024-11-20 09:48:11.838497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:35.168 [2024-11-20 09:48:11.895358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.168 [2024-11-20 09:48:11.895409] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.168 [2024-11-20 09:48:11.895436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.168 [2024-11-20 09:48:11.895447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.168 [2024-11-20 09:48:11.895457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.168 [2024-11-20 09:48:11.896808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.168 [2024-11-20 09:48:11.896877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.168 [2024-11-20 09:48:11.896874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.168 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:35.168 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:35.168 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:36.102 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:36.102 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:36.102 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:36.102 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.102 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:36.361 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.361 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:36.361 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:36.361 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.361 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:36.361 malloc0 00:14:36.361 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.361 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:36.361 09:48:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.361 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:36.361 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.361 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:36.361 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.361 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:36.361 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.361 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:36.361 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.361 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:36.361 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.361 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:36.361 00:14:36.361 00:14:36.361 CUnit - A unit testing framework for C - Version 2.1-3 00:14:36.361 http://cunit.sourceforge.net/ 00:14:36.361 00:14:36.361 00:14:36.361 Suite: nvme_compliance 00:14:36.361 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 09:48:13.249803] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.362 [2024-11-20 09:48:13.251259] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:36.362 [2024-11-20 09:48:13.251298] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:36.362 [2024-11-20 09:48:13.251321] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:36.362 [2024-11-20 09:48:13.252820] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.620 passed 00:14:36.620 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 09:48:13.337417] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.620 [2024-11-20 09:48:13.340438] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.620 passed 00:14:36.620 Test: admin_identify_ns ...[2024-11-20 09:48:13.426702] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.620 [2024-11-20 09:48:13.486339] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:36.620 [2024-11-20 09:48:13.494333] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:36.620 [2024-11-20 09:48:13.515464] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:14:36.878 passed 00:14:36.879 Test: admin_get_features_mandatory_features ...[2024-11-20 09:48:13.599035] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.879 [2024-11-20 09:48:13.602053] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.879 passed 00:14:36.879 Test: admin_get_features_optional_features ...[2024-11-20 09:48:13.686647] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.879 [2024-11-20 09:48:13.689653] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.879 passed 00:14:36.879 Test: admin_set_features_number_of_queues ...[2024-11-20 09:48:13.772747] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:37.136 [2024-11-20 09:48:13.877436] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:37.136 passed 00:14:37.136 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 09:48:13.963534] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:37.136 [2024-11-20 09:48:13.966552] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:37.136 passed 00:14:37.136 Test: admin_get_log_page_with_lpo ...[2024-11-20 09:48:14.045678] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:37.394 [2024-11-20 09:48:14.117321] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:37.394 [2024-11-20 09:48:14.130381] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:37.394 passed 00:14:37.394 Test: fabric_property_get ...[2024-11-20 09:48:14.212852] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:37.394 [2024-11-20 09:48:14.214129] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:37.394 [2024-11-20 09:48:14.215872] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:37.394 passed 00:14:37.394 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 09:48:14.298416] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:37.394 [2024-11-20 09:48:14.299743] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:37.394 [2024-11-20 09:48:14.301435] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:37.652 passed 00:14:37.652 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 09:48:14.384558] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:37.652 [2024-11-20 09:48:14.468312] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:37.652 [2024-11-20 09:48:14.484311] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:37.652 [2024-11-20 09:48:14.489415] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:37.652 passed 00:14:37.910 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 09:48:14.572520] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:37.910 [2024-11-20 09:48:14.573834] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:37.910 [2024-11-20 09:48:14.575548] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:14:37.910 passed 00:14:37.910 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 09:48:14.660664] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:37.910 [2024-11-20 09:48:14.736327] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:37.910 [2024-11-20 09:48:14.760316] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:37.910 [2024-11-20 09:48:14.765426] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:37.910 passed 00:14:38.168 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 09:48:14.848991] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:38.168 [2024-11-20 09:48:14.850334] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:38.168 [2024-11-20 09:48:14.850375] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:38.168 [2024-11-20 09:48:14.852019] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:38.168 passed 00:14:38.168 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 09:48:14.937365] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:38.168 [2024-11-20 09:48:15.027311] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:38.168 [2024-11-20 09:48:15.035309] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:38.168 [2024-11-20 09:48:15.043314] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:38.168 [2024-11-20 09:48:15.051326] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:38.426 [2024-11-20 09:48:15.084448] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:38.426 passed 00:14:38.426 Test: admin_create_io_sq_verify_pc ...[2024-11-20 09:48:15.163994] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:38.426 [2024-11-20 09:48:15.180325] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:38.426 [2024-11-20 09:48:15.198388] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:38.426 passed 00:14:38.426 Test: admin_create_io_qp_max_qps ...[2024-11-20 09:48:15.282950] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:39.799 [2024-11-20 09:48:16.387323] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:40.057 [2024-11-20 09:48:16.770895] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:40.057 passed 00:14:40.057 Test: admin_create_io_sq_shared_cq ...[2024-11-20 09:48:16.854217] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:40.315 [2024-11-20 09:48:16.987325] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:40.315 [2024-11-20 09:48:17.024406] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:40.315 passed 00:14:40.315 00:14:40.315 Run Summary: Type Total Ran Passed Failed Inactive 00:14:40.315 suites 1 1 n/a 0 0 00:14:40.315 tests 18 18 18 0 0 00:14:40.315 asserts 
360 360 360 0 n/a 00:14:40.315 00:14:40.315 Elapsed time = 1.566 seconds 00:14:40.315 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3717454 00:14:40.315 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3717454 ']' 00:14:40.315 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3717454 00:14:40.315 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:40.315 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:40.315 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3717454 00:14:40.315 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:40.315 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:40.315 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3717454' 00:14:40.315 killing process with pid 3717454 00:14:40.315 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3717454 00:14:40.315 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3717454 00:14:40.573 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:40.573 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:40.573 00:14:40.573 real 0m5.801s 00:14:40.573 user 0m16.281s 00:14:40.573 sys 0m0.552s 00:14:40.573 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:40.573 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:40.573 ************************************ 00:14:40.573 END TEST nvmf_vfio_user_nvme_compliance 00:14:40.573 ************************************ 00:14:40.573 09:48:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:40.573 09:48:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:40.573 09:48:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:40.573 09:48:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:40.573 ************************************ 00:14:40.573 START TEST nvmf_vfio_user_fuzz 00:14:40.573 ************************************ 00:14:40.573 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:40.573 * Looking for test storage... 
00:14:40.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:40.573 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:40.573 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:14:40.573 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:40.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.833 --rc genhtml_branch_coverage=1 00:14:40.833 --rc genhtml_function_coverage=1 00:14:40.833 --rc genhtml_legend=1 00:14:40.833 --rc geninfo_all_blocks=1 00:14:40.833 --rc geninfo_unexecuted_blocks=1 00:14:40.833 00:14:40.833 ' 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:40.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.833 --rc genhtml_branch_coverage=1 00:14:40.833 --rc genhtml_function_coverage=1 00:14:40.833 --rc genhtml_legend=1 00:14:40.833 --rc geninfo_all_blocks=1 00:14:40.833 --rc geninfo_unexecuted_blocks=1 00:14:40.833 00:14:40.833 ' 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:40.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.833 --rc genhtml_branch_coverage=1 00:14:40.833 --rc genhtml_function_coverage=1 00:14:40.833 --rc genhtml_legend=1 00:14:40.833 --rc geninfo_all_blocks=1 00:14:40.833 --rc geninfo_unexecuted_blocks=1 00:14:40.833 00:14:40.833 ' 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:40.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.833 --rc genhtml_branch_coverage=1 00:14:40.833 --rc genhtml_function_coverage=1 00:14:40.833 --rc genhtml_legend=1 00:14:40.833 --rc geninfo_all_blocks=1 00:14:40.833 --rc geninfo_unexecuted_blocks=1 00:14:40.833 00:14:40.833 ' 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:40.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:40.833 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:40.834 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:40.834 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:40.834 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:40.834 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:40.834 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:40.834 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:40.834 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3718181 00:14:40.834 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:40.834 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3718181' 00:14:40.834 Process pid: 3718181 00:14:40.834 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:40.834 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3718181 00:14:40.834 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3718181 ']' 00:14:40.834 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.834 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:40.834 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:40.834 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:40.834 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:41.092 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:41.092 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:41.092 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:42.026 malloc0 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
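Condensing the xtrace above: the fuzz target is a single-core nvmf_tgt (-i 0 -e 0xFFFF -m 0x1) with a VFIOUSER transport, one 64 MB / 512-byte-block malloc namespace, and a listener rooted at /var/run/vfio-user under subsystem nqn.2021-09.io.spdk:cnode0. A minimal standalone sketch of the same setup, assuming rpc_cmd maps to scripts/rpc.py on the default /var/tmp/spdk.sock (as it does in autotest_common.sh) and with paths shortened to be relative to the SPDK checkout:

  # start the target on core 0 with all trace flags enabled, as in the log
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # vfio-user endpoints live under this directory
  rm -rf /var/run/vfio-user && mkdir -p /var/run/vfio-user
  # transport plus the 64 MB malloc bdev that backs the namespace
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
  ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  # subsystem: allow any host (-a), serial number "spdk"
  ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_fuzz run that follows attaches to exactly this endpoint (the trid recorded above) for 30 seconds on core 1 (-m 0x2) with a fixed seed (-S 123456), which is what the command/opcode summary at the end of the test reports on.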
00:14:42.026 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:14.094 Fuzzing completed. Shutting down the fuzz application 00:15:14.094 00:15:14.094 Dumping successful admin opcodes: 00:15:14.094 8, 9, 10, 24, 00:15:14.094 Dumping successful io opcodes: 00:15:14.094 0, 00:15:14.094 NS: 0x20000081ef00 I/O qp, Total commands completed: 686805, total successful commands: 2677, random_seed: 3155498368 00:15:14.094 NS: 0x20000081ef00 admin qp, Total commands completed: 168728, total successful commands: 1375, random_seed: 4284576896 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3718181 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3718181 ']' 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3718181 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3718181 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3718181' 00:15:14.094 killing process with pid 3718181 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3718181 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3718181 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:14.094 00:15:14.094 real 0m32.240s 00:15:14.094 user 0m34.202s 00:15:14.094 sys 0m26.868s 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:14.094 
************************************ 00:15:14.094 END TEST nvmf_vfio_user_fuzz 00:15:14.094 ************************************ 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:14.094 ************************************ 00:15:14.094 START TEST nvmf_auth_target 00:15:14.094 ************************************ 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:14.094 * Looking for test storage... 00:15:14.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:14.094 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:14.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.095 --rc genhtml_branch_coverage=1 00:15:14.095 --rc genhtml_function_coverage=1 00:15:14.095 --rc genhtml_legend=1 00:15:14.095 --rc geninfo_all_blocks=1 00:15:14.095 --rc geninfo_unexecuted_blocks=1 00:15:14.095 00:15:14.095 ' 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:14.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.095 --rc genhtml_branch_coverage=1 00:15:14.095 --rc genhtml_function_coverage=1 00:15:14.095 --rc genhtml_legend=1 00:15:14.095 --rc geninfo_all_blocks=1 00:15:14.095 --rc geninfo_unexecuted_blocks=1 00:15:14.095 00:15:14.095 ' 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:14.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.095 --rc genhtml_branch_coverage=1 00:15:14.095 --rc genhtml_function_coverage=1 00:15:14.095 --rc genhtml_legend=1 00:15:14.095 --rc geninfo_all_blocks=1 00:15:14.095 --rc geninfo_unexecuted_blocks=1 00:15:14.095 00:15:14.095 ' 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:14.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.095 --rc genhtml_branch_coverage=1 00:15:14.095 --rc genhtml_function_coverage=1 00:15:14.095 --rc genhtml_legend=1 00:15:14.095 --rc geninfo_all_blocks=1 00:15:14.095 --rc geninfo_unexecuted_blocks=1 00:15:14.095 00:15:14.095 ' 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:14.095 09:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:14.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:14.095 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:14.096 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:14.096 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:14.096 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:14.096 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.096 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:14.096 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:14.096 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:14.096 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.096 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:14.096 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.096 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:14.096 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:14.096 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:14.096 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:15.473 
09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:15.473 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.473 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:15.473 09:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:15.473 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:15.473 Found net devices under 0000:09:00.0: cvl_0_0 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.473 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:15.473 Found net devices under 0000:09:00.1: cvl_0_1 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:15.474 09:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:15.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:15.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:15:15.474 00:15:15.474 --- 10.0.0.2 ping statistics --- 00:15:15.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.474 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:15.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:15.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:15:15.474 00:15:15.474 --- 10.0.0.1 ping statistics --- 00:15:15.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.474 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3723634 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3723634 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3723634 ']' 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
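The nvmf_tcp_init sequence above isolates one port of the E810 NIC detected earlier in its own network namespace, so target and initiator traffic crosses a real link: cvl_0_0 (10.0.0.2) moves into cvl_0_0_ns_spdk and hosts nvmf_tgt, while cvl_0_1 (10.0.0.1) stays in the root namespace for the host side. A condensed sketch of that wiring, with interface names and addresses taken from the log:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move port 0 into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

From here on every target-side command is prefixed with "ip netns exec cvl_0_0_ns_spdk", which is why the nvmf_tgt for the auth test (pid 3723634, started with -L nvmf_auth) is launched through that wrapper, while the host-side spdk_tgt runs in the root namespace.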
00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.474 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3723655 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=db198a537bc293714b1903fbe5b0686dbd37488190b73ce0 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ob2 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key db198a537bc293714b1903fbe5b0686dbd37488190b73ce0 0 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 db198a537bc293714b1903fbe5b0686dbd37488190b73ce0 0 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=db198a537bc293714b1903fbe5b0686dbd37488190b73ce0 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ob2 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ob2 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.ob2 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1b65ac27d12a470f5075cf3432c57e703dd0ebeb62fff05028521df4505f71ee 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ZeY 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1b65ac27d12a470f5075cf3432c57e703dd0ebeb62fff05028521df4505f71ee 3 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1b65ac27d12a470f5075cf3432c57e703dd0ebeb62fff05028521df4505f71ee 3 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1b65ac27d12a470f5075cf3432c57e703dd0ebeb62fff05028521df4505f71ee 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ZeY 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ZeY 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.ZeY 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=600304194669ff5fbec9e6b35cceae3c 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.RmI 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 600304194669ff5fbec9e6b35cceae3c 1 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 600304194669ff5fbec9e6b35cceae3c 1 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=600304194669ff5fbec9e6b35cceae3c 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.RmI 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.RmI 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.RmI 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7b48c058a95ad1fd41434ee7be960dc8e63b6a5795c60d37 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.BDt 00:15:15.733 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7b48c058a95ad1fd41434ee7be960dc8e63b6a5795c60d37 2 00:15:15.734 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7b48c058a95ad1fd41434ee7be960dc8e63b6a5795c60d37 2 00:15:15.734 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:15.734 09:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:15.734 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7b48c058a95ad1fd41434ee7be960dc8e63b6a5795c60d37 00:15:15.734 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:15.734 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.BDt 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.BDt 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.BDt 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f60964c1a4bb5345b22716d89e3060ffe4faf53767854ae2 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.kd8 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f60964c1a4bb5345b22716d89e3060ffe4faf53767854ae2 2 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f60964c1a4bb5345b22716d89e3060ffe4faf53767854ae2 2 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f60964c1a4bb5345b22716d89e3060ffe4faf53767854ae2 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.kd8 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.kd8 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.kd8 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ca06aaa4be3465ef959609a94eb07be2 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.7r0 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ca06aaa4be3465ef959609a94eb07be2 1 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ca06aaa4be3465ef959609a94eb07be2 1 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ca06aaa4be3465ef959609a94eb07be2 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.7r0 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.7r0 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.7r0 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bbc8c03e41a7fb63a8d64c948c25a0ccb21a70b1b3b65dbac3bfc81630dcd309 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.amK 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key bbc8c03e41a7fb63a8d64c948c25a0ccb21a70b1b3b65dbac3bfc81630dcd309 3 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bbc8c03e41a7fb63a8d64c948c25a0ccb21a70b1b3b65dbac3bfc81630dcd309 3 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bbc8c03e41a7fb63a8d64c948c25a0ccb21a70b1b3b65dbac3bfc81630dcd309 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.amK 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.amK 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.amK 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3723634 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3723634 ']' 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.992 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.251 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.251 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:16.251 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3723655 /var/tmp/host.sock 00:15:16.251 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3723655 ']' 00:15:16.251 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:16.251 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:16.251 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:16.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
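Each gen_dhchap_key call above draws len/2 random bytes, wraps them in the DHHC-1 secret representation used for NVMe-oF DH-HMAC-CHAP (digest id 0-3 for null/sha256/sha384/sha512), and stores the result in a 0600 temp file whose path lands in keys[]/ckeys[]. A stripped-down sketch of the steps visible in the xtrace; the python helper behind format_dhchap_key is elided in the log, so it is only referenced here, and the redirection into the temp file is inferred from the chmod that follows, not shown in the trace:

  gen_dhchap_key() {               # e.g. gen_dhchap_key sha256 32
      local digest=$1 len=$2 file key
      local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars of entropy
      file=$(mktemp -t "spdk.key-$digest.XXX")
      # format_dhchap_key (nvmf/common.sh) turns the hex into a
      # "DHHC-1:<digest id>:..." secret; output assumed captured into $file
      format_dhchap_key "$key" "${digests[$digest]}" > "$file"
      chmod 0600 "$file"
      echo "$file"
  }

In this run the test generates four subsystem keys (null/48, sha256/32, sha384/48, sha512/64) plus controller keys for the first three; ckeys[3] is left empty.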
00:15:16.251 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:16.251 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.508 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.508 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:16.508 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:16.508 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.508 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.508 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.508 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:16.508 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ob2 00:15:16.508 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.508 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.508 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.508 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ob2 00:15:16.508 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ob2 00:15:16.766 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.ZeY ]] 00:15:16.766 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZeY 00:15:16.766 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.766 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.766 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.766 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZeY 00:15:16.766 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZeY 00:15:17.025 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:17.025 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.RmI 00:15:17.025 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.025 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.283 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.283 09:48:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.RmI 00:15:17.283 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.RmI 00:15:17.541 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.BDt ]] 00:15:17.541 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BDt 00:15:17.541 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.541 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.541 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.541 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BDt 00:15:17.541 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BDt 00:15:17.799 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:17.799 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.kd8 00:15:17.799 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.799 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.799 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.799 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.kd8 00:15:17.799 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.kd8 00:15:18.058 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.7r0 ]] 00:15:18.058 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7r0 00:15:18.058 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.058 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.058 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.058 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7r0 00:15:18.058 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7r0 00:15:18.342 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:18.342 09:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.amK 00:15:18.342 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.342 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.342 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.342 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.amK 00:15:18.342 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.amK 00:15:18.600 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:18.600 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:18.600 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:18.600 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.600 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:18.600 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:18.859 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:18.859 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.859 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:18.859 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:18.859 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:18.859 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.859 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.859 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.859 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.859 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.859 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.859 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.859 
09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.118 00:15:19.118 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.118 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.118 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.376 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.376 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.376 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.376 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.376 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.376 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.376 { 00:15:19.376 "cntlid": 1, 00:15:19.376 "qid": 0, 00:15:19.376 "state": "enabled", 00:15:19.376 "thread": "nvmf_tgt_poll_group_000", 00:15:19.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:19.376 "listen_address": { 00:15:19.376 "trtype": "TCP", 00:15:19.376 "adrfam": "IPv4", 00:15:19.376 "traddr": "10.0.0.2", 00:15:19.376 "trsvcid": "4420" 00:15:19.376 }, 00:15:19.376 "peer_address": { 00:15:19.376 "trtype": "TCP", 00:15:19.376 "adrfam": "IPv4", 00:15:19.376 "traddr": "10.0.0.1", 00:15:19.376 "trsvcid": "41044" 00:15:19.376 }, 00:15:19.376 "auth": { 00:15:19.376 "state": "completed", 00:15:19.376 "digest": "sha256", 00:15:19.376 "dhgroup": "null" 00:15:19.376 } 00:15:19.376 } 00:15:19.376 ]' 00:15:19.376 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.376 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.376 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.635 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:19.635 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.635 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.635 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.635 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.892 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:15:19.892 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:15:20.825 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.825 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:20.825 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.825 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.825 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.825 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.825 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:20.825 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:21.083 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:21.083 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.083 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.083 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:21.083 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:21.083 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.083 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.083 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.083 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.083 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.083 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.083 09:48:57 
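From this point the script repeats the same connect/verify/teardown cycle for every (digest, dhgroup, keyid) combination, which is why the entries keep recurring with only the key index and, later, the dhgroup changing. Condensed, one iteration of that cycle looks like the sketch below (digest=sha256, dhgroup=null, key0 shown). The rpc_py/host_rpc_py variables and the relative scripts/rpc.py path are conveniences of this sketch rather than names taken from auth.sh; the subsystem NQN, host NQN, address, and flags are the ones visible in the trace.

# Condensed sketch of one loop iteration traced above (not the auth.sh source itself).
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
subnqn=nqn.2024-03.io.spdk:cnode0
rpc_py="scripts/rpc.py -s /var/tmp/spdk.sock"       # target-side RPC socket seen above
host_rpc_py="scripts/rpc.py -s /var/tmp/host.sock"  # host-side RPC socket seen above

# 1. Restrict the host to the digest/dhgroup under test.
$host_rpc_py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
# 2. Allow the host on the subsystem with the key pair for this iteration.
$rpc_py nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 3. Attach from the host side, which performs the DH-HMAC-CHAP exchange.
$host_rpc_py bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 4. The jq checks in the trace assert the negotiated parameters on the resulting qpair.
$rpc_py nvmf_subsystem_get_qpairs "$subnqn" \
    | jq -e '.[0].auth | .digest == "sha256" and .dhgroup == "null" and .state == "completed"'
# 5. Tear down before the next combination.
$host_rpc_py bdev_nvme_detach_controller nvme0

After the detach, the same credentials are exercised once more through nvme-cli (nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ..., then nvme disconnect), and the host is removed from the subsystem with nvmf_subsystem_remove_host before the next key index is tried, exactly as the surrounding entries show.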
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.083 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.340 00:15:21.340 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.340 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.340 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.598 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.598 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.598 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.598 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.598 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.598 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.598 { 00:15:21.598 "cntlid": 3, 00:15:21.598 "qid": 0, 00:15:21.598 "state": "enabled", 00:15:21.598 "thread": "nvmf_tgt_poll_group_000", 00:15:21.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:21.598 "listen_address": { 00:15:21.598 "trtype": "TCP", 00:15:21.598 "adrfam": "IPv4", 00:15:21.598 "traddr": "10.0.0.2", 00:15:21.598 "trsvcid": "4420" 00:15:21.598 }, 00:15:21.598 "peer_address": { 00:15:21.598 "trtype": "TCP", 00:15:21.598 "adrfam": "IPv4", 00:15:21.598 "traddr": "10.0.0.1", 00:15:21.598 "trsvcid": "37922" 00:15:21.598 }, 00:15:21.598 "auth": { 00:15:21.598 "state": "completed", 00:15:21.598 "digest": "sha256", 00:15:21.598 "dhgroup": "null" 00:15:21.598 } 00:15:21.598 } 00:15:21.598 ]' 00:15:21.856 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.856 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:21.856 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.856 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:21.856 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.856 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.856 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.856 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.112 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:15:22.112 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:15:23.044 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.044 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:23.044 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.044 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.044 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.044 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.044 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:23.044 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:23.301 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:23.301 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.301 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.301 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:23.301 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:23.301 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.301 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.301 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.301 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.301 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.301 09:49:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.301 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.301 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.558 00:15:23.558 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.558 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.558 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.816 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.816 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.816 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.816 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.816 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.816 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.816 { 00:15:23.816 "cntlid": 5, 00:15:23.816 "qid": 0, 00:15:23.816 "state": "enabled", 00:15:23.816 "thread": "nvmf_tgt_poll_group_000", 00:15:23.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:23.816 "listen_address": { 00:15:23.816 "trtype": "TCP", 00:15:23.816 "adrfam": "IPv4", 00:15:23.816 "traddr": "10.0.0.2", 00:15:23.816 "trsvcid": "4420" 00:15:23.816 }, 00:15:23.816 "peer_address": { 00:15:23.816 "trtype": "TCP", 00:15:23.816 "adrfam": "IPv4", 00:15:23.816 "traddr": "10.0.0.1", 00:15:23.816 "trsvcid": "37932" 00:15:23.816 }, 00:15:23.816 "auth": { 00:15:23.816 "state": "completed", 00:15:23.816 "digest": "sha256", 00:15:23.816 "dhgroup": "null" 00:15:23.816 } 00:15:23.816 } 00:15:23.816 ]' 00:15:23.816 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.074 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.074 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.074 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:24.074 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.074 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.074 09:49:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.074 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.332 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:15:24.333 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:15:25.268 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.268 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:25.268 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.268 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.268 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.268 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.268 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:25.268 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:25.526 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:25.526 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.526 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.526 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:25.526 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:25.526 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.526 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:25.526 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.526 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:25.526 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.526 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:25.526 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.526 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:26.091 00:15:26.091 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.091 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.091 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.349 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.349 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.349 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.349 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.349 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.349 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.349 { 00:15:26.349 "cntlid": 7, 00:15:26.349 "qid": 0, 00:15:26.349 "state": "enabled", 00:15:26.349 "thread": "nvmf_tgt_poll_group_000", 00:15:26.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:26.349 "listen_address": { 00:15:26.349 "trtype": "TCP", 00:15:26.349 "adrfam": "IPv4", 00:15:26.349 "traddr": "10.0.0.2", 00:15:26.349 "trsvcid": "4420" 00:15:26.349 }, 00:15:26.349 "peer_address": { 00:15:26.349 "trtype": "TCP", 00:15:26.349 "adrfam": "IPv4", 00:15:26.349 "traddr": "10.0.0.1", 00:15:26.349 "trsvcid": "37954" 00:15:26.349 }, 00:15:26.349 "auth": { 00:15:26.349 "state": "completed", 00:15:26.349 "digest": "sha256", 00:15:26.349 "dhgroup": "null" 00:15:26.349 } 00:15:26.349 } 00:15:26.349 ]' 00:15:26.349 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.349 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.349 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.349 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:26.349 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.349 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.349 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.349 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.607 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:15:26.607 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:15:27.544 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.544 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:27.544 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.544 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.544 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.544 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:27.544 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.544 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:27.544 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:27.802 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:27.802 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.802 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:27.802 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:27.802 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:27.802 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.802 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.802 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.802 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.802 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.802 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.802 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.802 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.061 00:15:28.319 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.319 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.319 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.577 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.577 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.577 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.577 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.577 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.577 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.577 { 00:15:28.577 "cntlid": 9, 00:15:28.577 "qid": 0, 00:15:28.577 "state": "enabled", 00:15:28.577 "thread": "nvmf_tgt_poll_group_000", 00:15:28.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:28.577 "listen_address": { 00:15:28.577 "trtype": "TCP", 00:15:28.577 "adrfam": "IPv4", 00:15:28.577 "traddr": "10.0.0.2", 00:15:28.577 "trsvcid": "4420" 00:15:28.577 }, 00:15:28.577 "peer_address": { 00:15:28.577 "trtype": "TCP", 00:15:28.577 "adrfam": "IPv4", 00:15:28.577 "traddr": "10.0.0.1", 00:15:28.577 "trsvcid": "37968" 00:15:28.577 }, 00:15:28.577 "auth": { 00:15:28.577 "state": "completed", 00:15:28.577 "digest": "sha256", 00:15:28.577 "dhgroup": "ffdhe2048" 00:15:28.577 } 00:15:28.577 } 00:15:28.577 ]' 00:15:28.577 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.577 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.577 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.577 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:15:28.577 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.577 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.577 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.577 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.835 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:15:28.835 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:15:29.802 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.802 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:29.802 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.802 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.802 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.802 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.802 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:29.802 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:30.088 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:30.088 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.088 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:30.088 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:30.088 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:30.088 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.088 09:49:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.088 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.088 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.088 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.088 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.088 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.088 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.359 00:15:30.359 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.359 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.359 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.624 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.624 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.624 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.624 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.624 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.624 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.624 { 00:15:30.624 "cntlid": 11, 00:15:30.624 "qid": 0, 00:15:30.624 "state": "enabled", 00:15:30.624 "thread": "nvmf_tgt_poll_group_000", 00:15:30.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:30.624 "listen_address": { 00:15:30.624 "trtype": "TCP", 00:15:30.624 "adrfam": "IPv4", 00:15:30.624 "traddr": "10.0.0.2", 00:15:30.624 "trsvcid": "4420" 00:15:30.624 }, 00:15:30.624 "peer_address": { 00:15:30.624 "trtype": "TCP", 00:15:30.624 "adrfam": "IPv4", 00:15:30.624 "traddr": "10.0.0.1", 00:15:30.624 "trsvcid": "39824" 00:15:30.624 }, 00:15:30.624 "auth": { 00:15:30.624 "state": "completed", 00:15:30.624 "digest": "sha256", 00:15:30.624 "dhgroup": "ffdhe2048" 00:15:30.624 } 00:15:30.624 } 00:15:30.624 ]' 00:15:30.624 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.624 09:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.624 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.882 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:30.882 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.882 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.882 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.882 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.140 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:15:31.140 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:15:32.074 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.074 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:32.074 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.074 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.074 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.074 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.074 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:32.074 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:32.333 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:32.333 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.333 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:32.333 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:32.333 09:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:32.333 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.333 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.333 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.333 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.333 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.333 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.333 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.333 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.591 00:15:32.591 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.591 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.591 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.849 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.849 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.849 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.849 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.107 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.107 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.107 { 00:15:33.107 "cntlid": 13, 00:15:33.107 "qid": 0, 00:15:33.107 "state": "enabled", 00:15:33.107 "thread": "nvmf_tgt_poll_group_000", 00:15:33.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:33.107 "listen_address": { 00:15:33.107 "trtype": "TCP", 00:15:33.107 "adrfam": "IPv4", 00:15:33.107 "traddr": "10.0.0.2", 00:15:33.107 "trsvcid": "4420" 00:15:33.107 }, 00:15:33.107 "peer_address": { 00:15:33.107 "trtype": "TCP", 00:15:33.107 "adrfam": "IPv4", 00:15:33.107 "traddr": "10.0.0.1", 00:15:33.107 "trsvcid": "39846" 00:15:33.107 }, 00:15:33.107 "auth": { 00:15:33.107 "state": "completed", 00:15:33.107 "digest": 
"sha256", 00:15:33.107 "dhgroup": "ffdhe2048" 00:15:33.107 } 00:15:33.107 } 00:15:33.107 ]' 00:15:33.107 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.107 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:33.107 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.107 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:33.107 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.107 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.107 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.107 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.365 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:15:33.365 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:15:34.299 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.299 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:34.299 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.299 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.299 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.299 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.299 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:34.299 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:34.557 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:34.557 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.557 09:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:34.557 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:34.557 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:34.557 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.557 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:34.557 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.557 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.557 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.557 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:34.557 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:34.558 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:35.124 00:15:35.124 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.124 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.124 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.383 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.383 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.383 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.383 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.383 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.383 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.383 { 00:15:35.383 "cntlid": 15, 00:15:35.383 "qid": 0, 00:15:35.383 "state": "enabled", 00:15:35.383 "thread": "nvmf_tgt_poll_group_000", 00:15:35.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:35.383 "listen_address": { 00:15:35.383 "trtype": "TCP", 00:15:35.383 "adrfam": "IPv4", 00:15:35.383 "traddr": "10.0.0.2", 00:15:35.383 "trsvcid": "4420" 00:15:35.383 }, 00:15:35.383 "peer_address": { 00:15:35.383 "trtype": "TCP", 00:15:35.383 "adrfam": "IPv4", 00:15:35.383 "traddr": "10.0.0.1", 00:15:35.383 
"trsvcid": "39878" 00:15:35.383 }, 00:15:35.383 "auth": { 00:15:35.383 "state": "completed", 00:15:35.383 "digest": "sha256", 00:15:35.383 "dhgroup": "ffdhe2048" 00:15:35.383 } 00:15:35.383 } 00:15:35.383 ]' 00:15:35.383 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.383 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.383 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.383 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:35.383 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.383 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.383 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.383 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.641 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:15:35.641 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:15:36.575 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.575 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:36.575 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.575 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.575 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.575 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.575 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.575 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:36.575 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:36.834 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:36.834 09:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.834 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.834 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:36.834 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:36.834 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.834 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.834 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.834 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.834 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.834 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.834 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.834 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.399 00:15:37.399 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.399 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.399 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.656 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.656 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.656 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.656 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.656 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.656 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.656 { 00:15:37.656 "cntlid": 17, 00:15:37.656 "qid": 0, 00:15:37.656 "state": "enabled", 00:15:37.656 "thread": "nvmf_tgt_poll_group_000", 00:15:37.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:37.656 "listen_address": { 00:15:37.656 "trtype": "TCP", 00:15:37.656 "adrfam": "IPv4", 
00:15:37.656 "traddr": "10.0.0.2", 00:15:37.656 "trsvcid": "4420" 00:15:37.656 }, 00:15:37.656 "peer_address": { 00:15:37.656 "trtype": "TCP", 00:15:37.656 "adrfam": "IPv4", 00:15:37.656 "traddr": "10.0.0.1", 00:15:37.656 "trsvcid": "39912" 00:15:37.656 }, 00:15:37.656 "auth": { 00:15:37.656 "state": "completed", 00:15:37.656 "digest": "sha256", 00:15:37.656 "dhgroup": "ffdhe3072" 00:15:37.656 } 00:15:37.656 } 00:15:37.656 ]' 00:15:37.656 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.656 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.656 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.656 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:37.656 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.656 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.656 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.656 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.913 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:15:37.913 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:15:38.843 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.843 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:38.843 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.843 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.843 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.843 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.843 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:38.843 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:39.100 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:39.100 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.100 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.100 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:39.100 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:39.100 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.100 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.100 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.100 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.100 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.100 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.100 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.100 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.666 00:15:39.666 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.666 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.666 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.923 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.923 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.923 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.923 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.923 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.923 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.923 { 
00:15:39.923 "cntlid": 19, 00:15:39.923 "qid": 0, 00:15:39.923 "state": "enabled", 00:15:39.923 "thread": "nvmf_tgt_poll_group_000", 00:15:39.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:39.923 "listen_address": { 00:15:39.923 "trtype": "TCP", 00:15:39.923 "adrfam": "IPv4", 00:15:39.923 "traddr": "10.0.0.2", 00:15:39.923 "trsvcid": "4420" 00:15:39.923 }, 00:15:39.923 "peer_address": { 00:15:39.923 "trtype": "TCP", 00:15:39.923 "adrfam": "IPv4", 00:15:39.923 "traddr": "10.0.0.1", 00:15:39.923 "trsvcid": "48542" 00:15:39.923 }, 00:15:39.923 "auth": { 00:15:39.923 "state": "completed", 00:15:39.923 "digest": "sha256", 00:15:39.923 "dhgroup": "ffdhe3072" 00:15:39.923 } 00:15:39.923 } 00:15:39.923 ]' 00:15:39.923 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.923 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.923 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.923 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:39.923 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.923 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.923 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.923 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.180 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:15:40.180 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:15:41.113 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.113 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:41.113 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.113 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.113 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.113 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.113 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:41.113 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:41.371 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:41.371 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.371 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:41.371 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:41.371 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:41.371 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.371 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.371 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.371 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.371 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.371 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.371 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.371 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.936 00:15:41.936 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.936 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.936 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.194 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.194 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.194 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.194 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.194 09:49:18 
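
Aside: the target/auth.sh@119-@123 markers that keep recurring above are a nested loop over DH groups and key indices. A hedged reconstruction of its shape, listing only the key and dhgroup names visible in this excerpt (the real auth.sh may cover more combinations):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  keys=(key0 key1 key2 key3)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)

  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          # Re-arm the host with a single digest/dhgroup pair for this pass...
          "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
              --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          # ...then run the add_host / attach / verify / detach cycle
          # sketched earlier for "key$keyid".
      done
  done
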
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.194 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.194 { 00:15:42.194 "cntlid": 21, 00:15:42.194 "qid": 0, 00:15:42.194 "state": "enabled", 00:15:42.194 "thread": "nvmf_tgt_poll_group_000", 00:15:42.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:42.194 "listen_address": { 00:15:42.194 "trtype": "TCP", 00:15:42.194 "adrfam": "IPv4", 00:15:42.194 "traddr": "10.0.0.2", 00:15:42.194 "trsvcid": "4420" 00:15:42.194 }, 00:15:42.194 "peer_address": { 00:15:42.194 "trtype": "TCP", 00:15:42.194 "adrfam": "IPv4", 00:15:42.194 "traddr": "10.0.0.1", 00:15:42.194 "trsvcid": "48566" 00:15:42.194 }, 00:15:42.194 "auth": { 00:15:42.194 "state": "completed", 00:15:42.194 "digest": "sha256", 00:15:42.194 "dhgroup": "ffdhe3072" 00:15:42.194 } 00:15:42.194 } 00:15:42.194 ]' 00:15:42.194 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.194 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.194 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.194 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:42.194 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.194 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.194 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.194 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.761 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:15:42.761 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.696 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:44.262 00:15:44.262 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.262 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.262 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.521 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.521 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.521 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.521 09:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.521 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.521 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.521 { 00:15:44.521 "cntlid": 23, 00:15:44.521 "qid": 0, 00:15:44.521 "state": "enabled", 00:15:44.521 "thread": "nvmf_tgt_poll_group_000", 00:15:44.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:44.521 "listen_address": { 00:15:44.521 "trtype": "TCP", 00:15:44.521 "adrfam": "IPv4", 00:15:44.521 "traddr": "10.0.0.2", 00:15:44.521 "trsvcid": "4420" 00:15:44.521 }, 00:15:44.521 "peer_address": { 00:15:44.521 "trtype": "TCP", 00:15:44.521 "adrfam": "IPv4", 00:15:44.521 "traddr": "10.0.0.1", 00:15:44.521 "trsvcid": "48588" 00:15:44.521 }, 00:15:44.521 "auth": { 00:15:44.521 "state": "completed", 00:15:44.521 "digest": "sha256", 00:15:44.521 "dhgroup": "ffdhe3072" 00:15:44.521 } 00:15:44.521 } 00:15:44.521 ]' 00:15:44.521 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.521 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.521 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.521 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:44.521 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.521 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.521 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.521 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.087 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:15:45.087 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:15:46.020 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.020 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:46.020 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.020 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.021 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:46.021 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:46.021 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.021 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:46.021 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:46.021 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:46.021 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.021 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:46.021 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:46.021 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:46.021 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.021 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.021 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.021 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.021 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.021 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.021 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.021 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.586 00:15:46.586 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.586 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.586 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.844 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.844 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.844 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.844 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.844 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.844 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.844 { 00:15:46.844 "cntlid": 25, 00:15:46.844 "qid": 0, 00:15:46.844 "state": "enabled", 00:15:46.844 "thread": "nvmf_tgt_poll_group_000", 00:15:46.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:46.844 "listen_address": { 00:15:46.844 "trtype": "TCP", 00:15:46.844 "adrfam": "IPv4", 00:15:46.844 "traddr": "10.0.0.2", 00:15:46.844 "trsvcid": "4420" 00:15:46.844 }, 00:15:46.844 "peer_address": { 00:15:46.844 "trtype": "TCP", 00:15:46.844 "adrfam": "IPv4", 00:15:46.844 "traddr": "10.0.0.1", 00:15:46.844 "trsvcid": "48624" 00:15:46.844 }, 00:15:46.844 "auth": { 00:15:46.844 "state": "completed", 00:15:46.844 "digest": "sha256", 00:15:46.844 "dhgroup": "ffdhe4096" 00:15:46.844 } 00:15:46.844 } 00:15:46.844 ]' 00:15:46.844 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.844 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.844 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.844 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:46.844 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.844 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.844 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.844 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.410 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:15:47.410 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:15:48.345 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.345 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:48.345 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.345 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.346 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.346 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.346 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:48.346 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:48.346 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:48.346 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.346 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:48.346 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:48.346 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:48.346 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.346 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.346 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.346 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.346 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.346 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.346 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.346 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.909 00:15:48.909 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.909 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.909 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.167 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.167 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.167 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.167 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.167 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.167 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.167 { 00:15:49.167 "cntlid": 27, 00:15:49.167 "qid": 0, 00:15:49.167 "state": "enabled", 00:15:49.167 "thread": "nvmf_tgt_poll_group_000", 00:15:49.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:49.167 "listen_address": { 00:15:49.167 "trtype": "TCP", 00:15:49.167 "adrfam": "IPv4", 00:15:49.167 "traddr": "10.0.0.2", 00:15:49.167 "trsvcid": "4420" 00:15:49.167 }, 00:15:49.167 "peer_address": { 00:15:49.167 "trtype": "TCP", 00:15:49.167 "adrfam": "IPv4", 00:15:49.167 "traddr": "10.0.0.1", 00:15:49.167 "trsvcid": "48650" 00:15:49.167 }, 00:15:49.167 "auth": { 00:15:49.167 "state": "completed", 00:15:49.167 "digest": "sha256", 00:15:49.167 "dhgroup": "ffdhe4096" 00:15:49.167 } 00:15:49.167 } 00:15:49.167 ]' 00:15:49.167 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.167 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.167 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.167 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:49.167 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.167 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.167 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.167 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.732 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:15:49.732 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:15:50.297 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:50.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.297 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:50.297 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.297 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.555 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.555 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.555 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:50.555 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:50.813 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:50.813 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.813 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.813 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:50.813 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:50.813 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.813 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.813 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.813 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.813 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.813 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.813 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.813 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.070 00:15:51.070 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
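
Aside: each iteration above also exercises the kernel initiator with in-band DH-HMAC-CHAP. A hedged sketch of that nvme-cli leg; the secret variables stand for the full DHHC-1:xx:... strings printed in this log:

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  host_secret='DHHC-1:...'   # placeholder for the host secret shown above
  ctrl_secret='DHHC-1:...'   # placeholder for the controller secret shown above

  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
      -q "$hostnqn" --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 \
      --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"

  nvme disconnect -n "$subnqn"
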
00:15:51.070 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.070 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.328 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.328 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.328 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.328 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.328 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.328 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.328 { 00:15:51.328 "cntlid": 29, 00:15:51.328 "qid": 0, 00:15:51.328 "state": "enabled", 00:15:51.328 "thread": "nvmf_tgt_poll_group_000", 00:15:51.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:51.328 "listen_address": { 00:15:51.328 "trtype": "TCP", 00:15:51.328 "adrfam": "IPv4", 00:15:51.328 "traddr": "10.0.0.2", 00:15:51.328 "trsvcid": "4420" 00:15:51.328 }, 00:15:51.328 "peer_address": { 00:15:51.328 "trtype": "TCP", 00:15:51.328 "adrfam": "IPv4", 00:15:51.328 "traddr": "10.0.0.1", 00:15:51.328 "trsvcid": "56112" 00:15:51.328 }, 00:15:51.328 "auth": { 00:15:51.328 "state": "completed", 00:15:51.328 "digest": "sha256", 00:15:51.328 "dhgroup": "ffdhe4096" 00:15:51.328 } 00:15:51.328 } 00:15:51.328 ]' 00:15:51.328 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.328 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.328 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.585 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:51.585 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.585 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.585 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.585 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.843 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:15:51.843 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: 
--dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:15:52.778 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.778 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:52.778 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.778 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.778 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.778 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.778 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:52.778 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:53.035 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:53.035 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.035 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:53.035 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:53.035 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:53.035 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.035 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:53.035 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.035 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.035 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.035 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:53.035 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:53.035 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:53.602 00:15:53.602 09:49:30 
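
Aside: note that the key3 passes call nvmf_subsystem_add_host with only --dhchap-key, while key0-key2 also pass --dhchap-ctrlr-key. That comes from the ${ckeys[...]:+...} expansion at target/auth.sh@68; a hedged sketch of the pattern, using the harness's rpc_cmd helper and ckeys[] array seen elsewhere in this log:

  keyid=3
  # Expands to nothing when ckeys[3] is unset/empty, so the controller key is
  # omitted and only the host authenticates itself (no bidirectional auth).
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      --dhchap-key "key$keyid" "${ckey[@]}"
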
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.602 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.602 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.861 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.861 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.861 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.861 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.861 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.861 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.861 { 00:15:53.861 "cntlid": 31, 00:15:53.861 "qid": 0, 00:15:53.861 "state": "enabled", 00:15:53.861 "thread": "nvmf_tgt_poll_group_000", 00:15:53.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:53.861 "listen_address": { 00:15:53.861 "trtype": "TCP", 00:15:53.861 "adrfam": "IPv4", 00:15:53.861 "traddr": "10.0.0.2", 00:15:53.861 "trsvcid": "4420" 00:15:53.861 }, 00:15:53.861 "peer_address": { 00:15:53.861 "trtype": "TCP", 00:15:53.861 "adrfam": "IPv4", 00:15:53.861 "traddr": "10.0.0.1", 00:15:53.861 "trsvcid": "56140" 00:15:53.861 }, 00:15:53.861 "auth": { 00:15:53.861 "state": "completed", 00:15:53.861 "digest": "sha256", 00:15:53.861 "dhgroup": "ffdhe4096" 00:15:53.861 } 00:15:53.861 } 00:15:53.861 ]' 00:15:53.861 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.861 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.861 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.861 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:53.861 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.861 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.861 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.861 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.122 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:15:54.122 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret 
DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:15:55.129 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.129 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:55.129 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.129 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.129 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.129 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:55.129 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.129 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:55.129 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:55.387 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:55.387 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.387 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:55.387 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:55.387 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:55.387 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.387 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.387 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.387 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.387 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.387 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.387 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.387 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.952 00:15:55.952 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.952 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.952 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.209 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.209 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.210 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.210 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.210 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.210 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.210 { 00:15:56.210 "cntlid": 33, 00:15:56.210 "qid": 0, 00:15:56.210 "state": "enabled", 00:15:56.210 "thread": "nvmf_tgt_poll_group_000", 00:15:56.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:56.210 "listen_address": { 00:15:56.210 "trtype": "TCP", 00:15:56.210 "adrfam": "IPv4", 00:15:56.210 "traddr": "10.0.0.2", 00:15:56.210 "trsvcid": "4420" 00:15:56.210 }, 00:15:56.210 "peer_address": { 00:15:56.210 "trtype": "TCP", 00:15:56.210 "adrfam": "IPv4", 00:15:56.210 "traddr": "10.0.0.1", 00:15:56.210 "trsvcid": "56172" 00:15:56.210 }, 00:15:56.210 "auth": { 00:15:56.210 "state": "completed", 00:15:56.210 "digest": "sha256", 00:15:56.210 "dhgroup": "ffdhe6144" 00:15:56.210 } 00:15:56.210 } 00:15:56.210 ]' 00:15:56.210 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.210 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.210 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.210 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:56.210 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.210 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.210 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.210 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.775 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret 
DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:15:56.775 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:15:57.708 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.708 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:57.708 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.708 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.708 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.709 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.709 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:57.709 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:57.709 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:57.709 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.709 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:57.709 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:57.709 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:57.709 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.709 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.709 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.709 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.709 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.709 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.709 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.709 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.273 00:15:58.273 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.273 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.273 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.531 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.531 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.531 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.531 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.531 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.531 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.531 { 00:15:58.531 "cntlid": 35, 00:15:58.531 "qid": 0, 00:15:58.531 "state": "enabled", 00:15:58.531 "thread": "nvmf_tgt_poll_group_000", 00:15:58.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:58.531 "listen_address": { 00:15:58.531 "trtype": "TCP", 00:15:58.531 "adrfam": "IPv4", 00:15:58.531 "traddr": "10.0.0.2", 00:15:58.531 "trsvcid": "4420" 00:15:58.531 }, 00:15:58.531 "peer_address": { 00:15:58.531 "trtype": "TCP", 00:15:58.531 "adrfam": "IPv4", 00:15:58.531 "traddr": "10.0.0.1", 00:15:58.531 "trsvcid": "56190" 00:15:58.531 }, 00:15:58.531 "auth": { 00:15:58.531 "state": "completed", 00:15:58.531 "digest": "sha256", 00:15:58.531 "dhgroup": "ffdhe6144" 00:15:58.531 } 00:15:58.531 } 00:15:58.531 ]' 00:15:58.531 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.789 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.789 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.789 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:58.789 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.789 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.789 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.789 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.047 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:15:59.047 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:15:59.981 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.981 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:59.981 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.981 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.981 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.981 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.981 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:59.981 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:00.239 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:00.239 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.239 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:00.239 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:00.239 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:00.239 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.239 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.239 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.239 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.239 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.239 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.239 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.239 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.803 00:16:00.803 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.803 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.803 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.060 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.060 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.060 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.060 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.060 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.060 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.060 { 00:16:01.060 "cntlid": 37, 00:16:01.060 "qid": 0, 00:16:01.060 "state": "enabled", 00:16:01.060 "thread": "nvmf_tgt_poll_group_000", 00:16:01.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:01.060 "listen_address": { 00:16:01.060 "trtype": "TCP", 00:16:01.060 "adrfam": "IPv4", 00:16:01.060 "traddr": "10.0.0.2", 00:16:01.060 "trsvcid": "4420" 00:16:01.060 }, 00:16:01.060 "peer_address": { 00:16:01.060 "trtype": "TCP", 00:16:01.060 "adrfam": "IPv4", 00:16:01.060 "traddr": "10.0.0.1", 00:16:01.060 "trsvcid": "47794" 00:16:01.060 }, 00:16:01.060 "auth": { 00:16:01.060 "state": "completed", 00:16:01.060 "digest": "sha256", 00:16:01.060 "dhgroup": "ffdhe6144" 00:16:01.060 } 00:16:01.060 } 00:16:01.060 ]' 00:16:01.060 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.060 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.060 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.060 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:01.060 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.060 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.060 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:01.060 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.625 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:16:01.625 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:16:02.190 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.447 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:02.447 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.447 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.447 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.447 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.447 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:02.447 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:02.704 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:02.704 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.704 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:02.704 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:02.704 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:02.704 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.704 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:02.704 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.704 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.704 09:49:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.704 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:02.704 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.704 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.270 00:16:03.270 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.270 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.270 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.528 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.528 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.528 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.528 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.528 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.528 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.528 { 00:16:03.528 "cntlid": 39, 00:16:03.528 "qid": 0, 00:16:03.528 "state": "enabled", 00:16:03.528 "thread": "nvmf_tgt_poll_group_000", 00:16:03.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:03.528 "listen_address": { 00:16:03.528 "trtype": "TCP", 00:16:03.528 "adrfam": "IPv4", 00:16:03.528 "traddr": "10.0.0.2", 00:16:03.528 "trsvcid": "4420" 00:16:03.528 }, 00:16:03.528 "peer_address": { 00:16:03.528 "trtype": "TCP", 00:16:03.528 "adrfam": "IPv4", 00:16:03.528 "traddr": "10.0.0.1", 00:16:03.528 "trsvcid": "47822" 00:16:03.528 }, 00:16:03.528 "auth": { 00:16:03.528 "state": "completed", 00:16:03.528 "digest": "sha256", 00:16:03.528 "dhgroup": "ffdhe6144" 00:16:03.528 } 00:16:03.528 } 00:16:03.528 ]' 00:16:03.528 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.528 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.528 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.528 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:03.528 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.528 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:03.528 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.528 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.786 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:16:03.786 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:16:04.722 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.722 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:04.722 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.722 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.722 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.722 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.722 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.722 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:04.722 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:04.980 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:04.980 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.980 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:04.980 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:04.980 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:04.980 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.980 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.980 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:04.980 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.980 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.980 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.981 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.981 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.912 00:16:05.912 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.912 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.912 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.170 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.170 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.170 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.170 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.170 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.170 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.170 { 00:16:06.170 "cntlid": 41, 00:16:06.170 "qid": 0, 00:16:06.170 "state": "enabled", 00:16:06.170 "thread": "nvmf_tgt_poll_group_000", 00:16:06.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:06.170 "listen_address": { 00:16:06.170 "trtype": "TCP", 00:16:06.170 "adrfam": "IPv4", 00:16:06.170 "traddr": "10.0.0.2", 00:16:06.170 "trsvcid": "4420" 00:16:06.170 }, 00:16:06.170 "peer_address": { 00:16:06.170 "trtype": "TCP", 00:16:06.170 "adrfam": "IPv4", 00:16:06.170 "traddr": "10.0.0.1", 00:16:06.170 "trsvcid": "47838" 00:16:06.170 }, 00:16:06.170 "auth": { 00:16:06.170 "state": "completed", 00:16:06.170 "digest": "sha256", 00:16:06.170 "dhgroup": "ffdhe8192" 00:16:06.170 } 00:16:06.170 } 00:16:06.170 ]' 00:16:06.170 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.427 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:06.427 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.427 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:06.427 09:49:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.427 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.427 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.427 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.684 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:16:06.684 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:16:07.618 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.618 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:07.618 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.618 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.618 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.618 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.618 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:07.618 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:07.877 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:07.877 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.877 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:07.877 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:07.877 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:07.877 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.877 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.877 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.877 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.877 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.877 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.877 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.877 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.809 00:16:08.809 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.809 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.809 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.116 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.116 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.116 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.116 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.116 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.116 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.116 { 00:16:09.116 "cntlid": 43, 00:16:09.116 "qid": 0, 00:16:09.116 "state": "enabled", 00:16:09.116 "thread": "nvmf_tgt_poll_group_000", 00:16:09.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:09.116 "listen_address": { 00:16:09.116 "trtype": "TCP", 00:16:09.116 "adrfam": "IPv4", 00:16:09.116 "traddr": "10.0.0.2", 00:16:09.116 "trsvcid": "4420" 00:16:09.116 }, 00:16:09.116 "peer_address": { 00:16:09.116 "trtype": "TCP", 00:16:09.116 "adrfam": "IPv4", 00:16:09.116 "traddr": "10.0.0.1", 00:16:09.116 "trsvcid": "47864" 00:16:09.116 }, 00:16:09.116 "auth": { 00:16:09.116 "state": "completed", 00:16:09.116 "digest": "sha256", 00:16:09.116 "dhgroup": "ffdhe8192" 00:16:09.116 } 00:16:09.116 } 00:16:09.116 ]' 00:16:09.116 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.116 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:09.116 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.116 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:09.116 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.116 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.116 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.116 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.373 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:16:09.373 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:16:10.304 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.304 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:10.304 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.304 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.304 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.304 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.304 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:10.304 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:10.561 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:10.561 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.561 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.561 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:10.561 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:10.561 09:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.561 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.561 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.561 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.561 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.561 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.561 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.561 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.493 00:16:11.493 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.493 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.493 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.750 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.750 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.750 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.750 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.750 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.750 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.750 { 00:16:11.750 "cntlid": 45, 00:16:11.750 "qid": 0, 00:16:11.750 "state": "enabled", 00:16:11.750 "thread": "nvmf_tgt_poll_group_000", 00:16:11.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:11.750 "listen_address": { 00:16:11.750 "trtype": "TCP", 00:16:11.750 "adrfam": "IPv4", 00:16:11.750 "traddr": "10.0.0.2", 00:16:11.750 "trsvcid": "4420" 00:16:11.750 }, 00:16:11.750 "peer_address": { 00:16:11.750 "trtype": "TCP", 00:16:11.750 "adrfam": "IPv4", 00:16:11.750 "traddr": "10.0.0.1", 00:16:11.750 "trsvcid": "46708" 00:16:11.750 }, 00:16:11.750 "auth": { 00:16:11.750 "state": "completed", 00:16:11.750 "digest": "sha256", 00:16:11.750 "dhgroup": "ffdhe8192" 00:16:11.750 } 00:16:11.750 } 00:16:11.750 ]' 00:16:11.750 
09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.750 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.750 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.750 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:11.750 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.008 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.008 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.008 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.266 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:16:12.266 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:16:13.203 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.203 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:13.203 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.203 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.203 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.203 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.203 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:13.203 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:13.461 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:13.461 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.461 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:13.461 09:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:13.461 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:13.461 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.461 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:13.461 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.461 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.461 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.461 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:13.461 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.461 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.393 00:16:14.393 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.393 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.393 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.651 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.651 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.651 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.651 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.651 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.651 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.651 { 00:16:14.651 "cntlid": 47, 00:16:14.651 "qid": 0, 00:16:14.651 "state": "enabled", 00:16:14.651 "thread": "nvmf_tgt_poll_group_000", 00:16:14.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:14.651 "listen_address": { 00:16:14.651 "trtype": "TCP", 00:16:14.651 "adrfam": "IPv4", 00:16:14.651 "traddr": "10.0.0.2", 00:16:14.651 "trsvcid": "4420" 00:16:14.651 }, 00:16:14.651 "peer_address": { 00:16:14.651 "trtype": "TCP", 00:16:14.651 "adrfam": "IPv4", 00:16:14.651 "traddr": "10.0.0.1", 00:16:14.651 "trsvcid": "46724" 00:16:14.651 }, 00:16:14.651 "auth": { 00:16:14.651 "state": "completed", 00:16:14.651 
"digest": "sha256", 00:16:14.651 "dhgroup": "ffdhe8192" 00:16:14.651 } 00:16:14.651 } 00:16:14.651 ]' 00:16:14.651 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.651 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.651 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.651 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:14.651 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.651 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.651 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.651 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.909 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:16:14.909 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:16:15.841 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.841 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:15.841 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.841 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.841 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.841 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:15.841 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.841 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.841 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:15.841 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:16.405 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:16.405 09:49:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.405 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:16.405 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:16.405 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:16.405 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.405 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.405 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.405 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.405 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.405 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.405 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.405 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.662 00:16:16.662 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.662 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.662 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.920 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.920 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.920 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.920 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.920 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.920 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.920 { 00:16:16.920 "cntlid": 49, 00:16:16.920 "qid": 0, 00:16:16.920 "state": "enabled", 00:16:16.920 "thread": "nvmf_tgt_poll_group_000", 00:16:16.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:16.920 "listen_address": { 00:16:16.920 "trtype": "TCP", 00:16:16.920 "adrfam": "IPv4", 
00:16:16.920 "traddr": "10.0.0.2", 00:16:16.920 "trsvcid": "4420" 00:16:16.920 }, 00:16:16.920 "peer_address": { 00:16:16.920 "trtype": "TCP", 00:16:16.920 "adrfam": "IPv4", 00:16:16.920 "traddr": "10.0.0.1", 00:16:16.920 "trsvcid": "46748" 00:16:16.920 }, 00:16:16.920 "auth": { 00:16:16.920 "state": "completed", 00:16:16.920 "digest": "sha384", 00:16:16.920 "dhgroup": "null" 00:16:16.920 } 00:16:16.920 } 00:16:16.920 ]' 00:16:16.920 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.920 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.920 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.920 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:16.920 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.177 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.178 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.178 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.435 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:16:17.435 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:16:18.368 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.368 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:18.368 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.368 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.368 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.368 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.368 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:18.368 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:18.627 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:18.627 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.627 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.627 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:18.627 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:18.627 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.627 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.627 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.627 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.627 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.627 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.627 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.627 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.884 00:16:18.884 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.884 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.884 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.142 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.142 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.142 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.142 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.142 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.142 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.142 { 00:16:19.142 "cntlid": 51, 00:16:19.142 "qid": 0, 00:16:19.142 "state": "enabled", 
00:16:19.142 "thread": "nvmf_tgt_poll_group_000", 00:16:19.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:19.142 "listen_address": { 00:16:19.142 "trtype": "TCP", 00:16:19.142 "adrfam": "IPv4", 00:16:19.142 "traddr": "10.0.0.2", 00:16:19.142 "trsvcid": "4420" 00:16:19.142 }, 00:16:19.142 "peer_address": { 00:16:19.142 "trtype": "TCP", 00:16:19.142 "adrfam": "IPv4", 00:16:19.142 "traddr": "10.0.0.1", 00:16:19.142 "trsvcid": "46768" 00:16:19.142 }, 00:16:19.142 "auth": { 00:16:19.142 "state": "completed", 00:16:19.142 "digest": "sha384", 00:16:19.142 "dhgroup": "null" 00:16:19.142 } 00:16:19.142 } 00:16:19.142 ]' 00:16:19.142 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.142 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.142 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.398 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:19.398 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.398 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.398 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.398 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.655 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:16:19.655 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:16:20.584 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.585 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:20.585 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.585 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.585 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.585 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.585 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:16:20.585 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:20.842 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:20.842 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.842 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:20.842 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:20.842 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:20.842 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.842 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.842 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.842 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.842 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.842 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.842 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.842 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.408 00:16:21.408 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.408 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.408 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.408 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.408 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.408 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.408 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.408 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.408 09:49:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.408 { 00:16:21.408 "cntlid": 53, 00:16:21.408 "qid": 0, 00:16:21.408 "state": "enabled", 00:16:21.408 "thread": "nvmf_tgt_poll_group_000", 00:16:21.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:21.408 "listen_address": { 00:16:21.408 "trtype": "TCP", 00:16:21.408 "adrfam": "IPv4", 00:16:21.408 "traddr": "10.0.0.2", 00:16:21.408 "trsvcid": "4420" 00:16:21.408 }, 00:16:21.408 "peer_address": { 00:16:21.408 "trtype": "TCP", 00:16:21.408 "adrfam": "IPv4", 00:16:21.408 "traddr": "10.0.0.1", 00:16:21.408 "trsvcid": "51822" 00:16:21.408 }, 00:16:21.408 "auth": { 00:16:21.408 "state": "completed", 00:16:21.408 "digest": "sha384", 00:16:21.408 "dhgroup": "null" 00:16:21.408 } 00:16:21.408 } 00:16:21.408 ]' 00:16:21.665 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.665 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.665 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.665 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:21.665 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.665 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.666 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.666 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.923 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:16:21.923 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:16:22.855 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.855 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:22.855 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.855 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.855 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.855 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:16:22.855 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:22.855 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:23.113 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:23.113 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.113 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:23.113 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:23.113 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:23.113 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.113 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:23.113 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.113 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.113 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.113 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:23.113 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:23.113 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:23.679 00:16:23.679 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.679 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.679 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.679 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.679 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.679 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.679 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.936 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.936 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.936 { 00:16:23.936 "cntlid": 55, 00:16:23.936 "qid": 0, 00:16:23.936 "state": "enabled", 00:16:23.936 "thread": "nvmf_tgt_poll_group_000", 00:16:23.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:23.936 "listen_address": { 00:16:23.936 "trtype": "TCP", 00:16:23.936 "adrfam": "IPv4", 00:16:23.936 "traddr": "10.0.0.2", 00:16:23.936 "trsvcid": "4420" 00:16:23.936 }, 00:16:23.936 "peer_address": { 00:16:23.936 "trtype": "TCP", 00:16:23.936 "adrfam": "IPv4", 00:16:23.936 "traddr": "10.0.0.1", 00:16:23.936 "trsvcid": "51864" 00:16:23.936 }, 00:16:23.936 "auth": { 00:16:23.936 "state": "completed", 00:16:23.936 "digest": "sha384", 00:16:23.936 "dhgroup": "null" 00:16:23.936 } 00:16:23.936 } 00:16:23.936 ]' 00:16:23.936 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.936 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.936 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.936 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:23.936 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.936 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.936 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.937 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.194 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:16:24.194 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:16:25.127 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.127 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:25.127 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.127 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.127 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.127 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:25.127 09:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.127 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:25.127 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:25.385 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:25.385 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.385 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.385 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:25.385 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:25.385 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.385 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.385 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.385 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.385 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.385 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.385 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.385 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.643 00:16:25.643 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.643 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.643 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.209 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.209 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.209 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:26.209 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.210 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.210 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.210 { 00:16:26.210 "cntlid": 57, 00:16:26.210 "qid": 0, 00:16:26.210 "state": "enabled", 00:16:26.210 "thread": "nvmf_tgt_poll_group_000", 00:16:26.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:26.210 "listen_address": { 00:16:26.210 "trtype": "TCP", 00:16:26.210 "adrfam": "IPv4", 00:16:26.210 "traddr": "10.0.0.2", 00:16:26.210 "trsvcid": "4420" 00:16:26.210 }, 00:16:26.210 "peer_address": { 00:16:26.210 "trtype": "TCP", 00:16:26.210 "adrfam": "IPv4", 00:16:26.210 "traddr": "10.0.0.1", 00:16:26.210 "trsvcid": "51886" 00:16:26.210 }, 00:16:26.210 "auth": { 00:16:26.210 "state": "completed", 00:16:26.210 "digest": "sha384", 00:16:26.210 "dhgroup": "ffdhe2048" 00:16:26.210 } 00:16:26.210 } 00:16:26.210 ]' 00:16:26.210 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.210 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.210 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.210 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:26.210 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.210 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.210 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.210 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.467 09:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:16:26.467 09:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:16:27.494 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.494 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:27.494 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.494 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.494 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.494 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.494 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:27.494 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:27.752 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:27.752 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.752 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:27.752 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:27.752 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:27.752 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.752 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.752 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.752 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.752 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.752 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.752 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.752 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.011 00:16:28.011 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.011 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.011 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.269 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.269 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.269 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.269 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.269 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.269 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.269 { 00:16:28.269 "cntlid": 59, 00:16:28.269 "qid": 0, 00:16:28.269 "state": "enabled", 00:16:28.269 "thread": "nvmf_tgt_poll_group_000", 00:16:28.269 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:28.269 "listen_address": { 00:16:28.269 "trtype": "TCP", 00:16:28.269 "adrfam": "IPv4", 00:16:28.269 "traddr": "10.0.0.2", 00:16:28.269 "trsvcid": "4420" 00:16:28.269 }, 00:16:28.269 "peer_address": { 00:16:28.269 "trtype": "TCP", 00:16:28.269 "adrfam": "IPv4", 00:16:28.269 "traddr": "10.0.0.1", 00:16:28.269 "trsvcid": "51904" 00:16:28.269 }, 00:16:28.269 "auth": { 00:16:28.269 "state": "completed", 00:16:28.269 "digest": "sha384", 00:16:28.269 "dhgroup": "ffdhe2048" 00:16:28.269 } 00:16:28.269 } 00:16:28.269 ]' 00:16:28.269 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.269 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.269 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.528 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:28.528 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.528 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.528 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.528 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.786 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:16:28.786 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:16:29.720 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.720 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:29.720 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.720 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.720 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.720 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.720 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:29.720 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:29.979 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:29.979 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.979 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:29.979 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:29.979 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:29.979 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.979 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.979 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.979 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.979 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.979 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.979 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.979 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.236 00:16:30.236 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.236 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.236 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.494 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.494 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.494 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.494 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.494 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.494 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.494 { 00:16:30.494 "cntlid": 61, 00:16:30.494 "qid": 0, 00:16:30.494 "state": "enabled", 00:16:30.494 "thread": "nvmf_tgt_poll_group_000", 00:16:30.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:30.494 "listen_address": { 00:16:30.494 "trtype": "TCP", 00:16:30.494 "adrfam": "IPv4", 00:16:30.494 "traddr": "10.0.0.2", 00:16:30.494 "trsvcid": "4420" 00:16:30.494 }, 00:16:30.494 "peer_address": { 00:16:30.494 "trtype": "TCP", 00:16:30.494 "adrfam": "IPv4", 00:16:30.494 "traddr": "10.0.0.1", 00:16:30.494 "trsvcid": "47484" 00:16:30.494 }, 00:16:30.494 "auth": { 00:16:30.494 "state": "completed", 00:16:30.494 "digest": "sha384", 00:16:30.494 "dhgroup": "ffdhe2048" 00:16:30.494 } 00:16:30.494 } 00:16:30.494 ]' 00:16:30.494 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.753 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.753 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.753 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:30.753 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.753 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.753 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.753 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.011 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:16:31.011 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:16:31.944 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.944 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:31.944 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.944 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.944 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.944 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.945 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:31.945 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:32.203 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:32.203 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.203 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:32.203 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:32.203 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:32.203 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.203 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:32.203 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.203 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.203 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.203 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:32.203 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.203 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.460 00:16:32.460 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.460 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.460 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.717 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.717 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.717 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.717 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.717 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.717 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.717 { 00:16:32.717 "cntlid": 63, 00:16:32.717 "qid": 0, 00:16:32.717 "state": "enabled", 00:16:32.717 "thread": "nvmf_tgt_poll_group_000", 00:16:32.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:32.717 "listen_address": { 00:16:32.717 "trtype": "TCP", 00:16:32.717 "adrfam": "IPv4", 00:16:32.717 "traddr": "10.0.0.2", 00:16:32.717 "trsvcid": "4420" 00:16:32.717 }, 00:16:32.717 "peer_address": { 00:16:32.717 "trtype": "TCP", 00:16:32.717 "adrfam": "IPv4", 00:16:32.717 "traddr": "10.0.0.1", 00:16:32.717 "trsvcid": "47514" 00:16:32.717 }, 00:16:32.717 "auth": { 00:16:32.717 "state": "completed", 00:16:32.717 "digest": "sha384", 00:16:32.717 "dhgroup": "ffdhe2048" 00:16:32.717 } 00:16:32.717 } 00:16:32.717 ]' 00:16:32.717 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.717 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:32.717 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.975 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:32.975 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.975 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.975 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.975 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.233 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:16:33.233 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:16:34.167 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:34.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.167 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:34.167 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.167 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.167 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.167 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.167 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.167 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:34.167 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:34.425 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:34.425 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.425 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:34.425 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:34.425 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:34.425 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.425 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.425 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.425 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.425 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.425 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.425 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.425 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.683 
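For reference, the loop traced above boils down to the following host/target RPC sequence, repeated once per digest/dhgroup/key combination. This is a minimal sketch rather than the test script itself: it assumes an SPDK nvmf target is already listening on 10.0.0.2:4420, that keyring entries named key0/ckey0 were registered earlier in auth.sh (not shown in this part of the log), and that rpc.py stands for scripts/rpc.py from the SPDK tree; the target-side calls go to the default RPC socket while the host-side ones use the -s /var/tmp/host.sock instance started by the test.

  # Host side: restrict the initiator to one digest/dhgroup pair
  # (same call the hostrpc bdev_nvme_set_options lines above make).
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # Target side: allow the host NQN on the subsystem with a key pair.
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: attach a controller, authenticating with the same keys.
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

Keys without a controller key (key3 in this run) simply drop the --dhchap-ctrlr-key arguments, which is what the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion in connect_authenticate takes care of.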
00:16:34.683 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.683 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.683 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.942 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.942 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.942 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.942 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.200 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.200 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.200 { 00:16:35.200 "cntlid": 65, 00:16:35.200 "qid": 0, 00:16:35.200 "state": "enabled", 00:16:35.200 "thread": "nvmf_tgt_poll_group_000", 00:16:35.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:35.200 "listen_address": { 00:16:35.200 "trtype": "TCP", 00:16:35.200 "adrfam": "IPv4", 00:16:35.200 "traddr": "10.0.0.2", 00:16:35.200 "trsvcid": "4420" 00:16:35.200 }, 00:16:35.200 "peer_address": { 00:16:35.200 "trtype": "TCP", 00:16:35.200 "adrfam": "IPv4", 00:16:35.200 "traddr": "10.0.0.1", 00:16:35.200 "trsvcid": "47546" 00:16:35.200 }, 00:16:35.200 "auth": { 00:16:35.200 "state": "completed", 00:16:35.200 "digest": "sha384", 00:16:35.200 "dhgroup": "ffdhe3072" 00:16:35.200 } 00:16:35.200 } 00:16:35.200 ]' 00:16:35.200 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.200 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:35.200 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.200 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:35.200 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.200 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.200 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.200 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.458 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:16:35.458 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:16:36.392 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.392 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:36.392 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.392 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.392 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.392 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.392 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:36.392 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:36.650 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:36.650 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.650 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:36.650 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:36.650 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:36.650 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.650 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.650 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.650 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.650 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.650 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.650 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.650 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.907 00:16:36.907 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.907 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.907 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.166 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.166 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.166 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.166 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.166 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.166 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.166 { 00:16:37.166 "cntlid": 67, 00:16:37.166 "qid": 0, 00:16:37.166 "state": "enabled", 00:16:37.166 "thread": "nvmf_tgt_poll_group_000", 00:16:37.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:37.166 "listen_address": { 00:16:37.166 "trtype": "TCP", 00:16:37.166 "adrfam": "IPv4", 00:16:37.166 "traddr": "10.0.0.2", 00:16:37.166 "trsvcid": "4420" 00:16:37.166 }, 00:16:37.166 "peer_address": { 00:16:37.166 "trtype": "TCP", 00:16:37.166 "adrfam": "IPv4", 00:16:37.166 "traddr": "10.0.0.1", 00:16:37.166 "trsvcid": "47578" 00:16:37.166 }, 00:16:37.166 "auth": { 00:16:37.166 "state": "completed", 00:16:37.166 "digest": "sha384", 00:16:37.166 "dhgroup": "ffdhe3072" 00:16:37.166 } 00:16:37.166 } 00:16:37.166 ]' 00:16:37.166 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.424 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.424 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.424 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.424 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.424 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.425 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.425 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret 
DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:16:37.682 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:16:38.614 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.614 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:38.614 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.614 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.614 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.614 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.614 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:38.614 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:38.872 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:38.872 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.872 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:38.872 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:38.872 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:38.872 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.872 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.872 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.872 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.872 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.872 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.872 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.872 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.438 00:16:39.438 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.438 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.438 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.696 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.696 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.696 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.696 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.696 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.696 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.696 { 00:16:39.696 "cntlid": 69, 00:16:39.696 "qid": 0, 00:16:39.696 "state": "enabled", 00:16:39.696 "thread": "nvmf_tgt_poll_group_000", 00:16:39.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:39.696 "listen_address": { 00:16:39.696 "trtype": "TCP", 00:16:39.696 "adrfam": "IPv4", 00:16:39.696 "traddr": "10.0.0.2", 00:16:39.696 "trsvcid": "4420" 00:16:39.696 }, 00:16:39.696 "peer_address": { 00:16:39.696 "trtype": "TCP", 00:16:39.696 "adrfam": "IPv4", 00:16:39.696 "traddr": "10.0.0.1", 00:16:39.696 "trsvcid": "45112" 00:16:39.696 }, 00:16:39.696 "auth": { 00:16:39.696 "state": "completed", 00:16:39.696 "digest": "sha384", 00:16:39.696 "dhgroup": "ffdhe3072" 00:16:39.696 } 00:16:39.696 } 00:16:39.696 ]' 00:16:39.696 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.696 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.696 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.696 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.696 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.696 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.696 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.696 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:39.954 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:16:39.954 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:16:40.888 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.888 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:40.888 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.888 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.888 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.888 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.888 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:40.888 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:41.146 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:41.146 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.146 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:41.146 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:41.146 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:41.146 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.146 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:41.146 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.146 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.146 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.146 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
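The nvme_connect / nvme disconnect pairs in the log exercise the same keys through the kernel initiator via nvme-cli. A minimal sketch of that step follows, with the two DHHC-1 secrets shown as placeholders rather than the actual key material from this run:

  # Values taken from the test environment above; the two secrets stand in
  # for the DHHC-1:xx:...: strings that nvme_connect passes on the command line.
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  hostid=29f67375-a902-e411-ace9-001e67bc3c9a
  host_secret='DHHC-1:02:<host key, placeholder>:'
  ctrl_secret='DHHC-1:01:<controller key, placeholder>:'

  # Connect with in-band DH-HMAC-CHAP authentication (bidirectional here).
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"

  # ... use the authenticated controller ...

  nvme disconnect -n "$subnqn"

For keys that have no controller secret (key3 in this run), the log shows only --dhchap-secret being passed, so --dhchap-ctrl-secret is omitted and authentication is unidirectional.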
00:16:41.146 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.146 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.712 00:16:41.712 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.712 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.712 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.970 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.970 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.970 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.970 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.970 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.970 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.970 { 00:16:41.970 "cntlid": 71, 00:16:41.970 "qid": 0, 00:16:41.970 "state": "enabled", 00:16:41.970 "thread": "nvmf_tgt_poll_group_000", 00:16:41.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:41.970 "listen_address": { 00:16:41.970 "trtype": "TCP", 00:16:41.970 "adrfam": "IPv4", 00:16:41.970 "traddr": "10.0.0.2", 00:16:41.970 "trsvcid": "4420" 00:16:41.970 }, 00:16:41.970 "peer_address": { 00:16:41.970 "trtype": "TCP", 00:16:41.970 "adrfam": "IPv4", 00:16:41.970 "traddr": "10.0.0.1", 00:16:41.970 "trsvcid": "45134" 00:16:41.970 }, 00:16:41.970 "auth": { 00:16:41.970 "state": "completed", 00:16:41.970 "digest": "sha384", 00:16:41.970 "dhgroup": "ffdhe3072" 00:16:41.970 } 00:16:41.970 } 00:16:41.970 ]' 00:16:41.970 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.970 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:41.970 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.970 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:41.970 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.970 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.970 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.970 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.228 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:16:42.228 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:16:43.170 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.170 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:43.170 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.170 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.170 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.170 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.170 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.170 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:43.170 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:43.428 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:43.428 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.428 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:43.428 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:43.428 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.428 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.428 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.428 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.428 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.428 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
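Every attach in the log is followed by the same verification block: confirm the controller exists on the host, then read the qpair back from the target and check the negotiated authentication parameters before tearing the connection down. A condensed sketch of what those hostrpc / rpc_cmd / jq lines assert (ffdhe4096 shown, matching the group under test at this point; rpc.py again stands for scripts/rpc.py):

  # Host side: the controller created by bdev_nvme_attach_controller must exist.
  name=$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]

  # Target side: the qpair must report the expected digest and dhgroup,
  # and the auth state must have reached "completed".
  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Detach before the next key/dhgroup combination is tried.
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0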
00:16:43.428 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.428 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.428 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.994 00:16:43.994 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.994 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.994 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.252 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.252 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.252 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.252 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.252 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.252 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.252 { 00:16:44.252 "cntlid": 73, 00:16:44.252 "qid": 0, 00:16:44.252 "state": "enabled", 00:16:44.252 "thread": "nvmf_tgt_poll_group_000", 00:16:44.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:44.252 "listen_address": { 00:16:44.252 "trtype": "TCP", 00:16:44.252 "adrfam": "IPv4", 00:16:44.252 "traddr": "10.0.0.2", 00:16:44.252 "trsvcid": "4420" 00:16:44.252 }, 00:16:44.252 "peer_address": { 00:16:44.252 "trtype": "TCP", 00:16:44.252 "adrfam": "IPv4", 00:16:44.252 "traddr": "10.0.0.1", 00:16:44.252 "trsvcid": "45152" 00:16:44.252 }, 00:16:44.252 "auth": { 00:16:44.252 "state": "completed", 00:16:44.252 "digest": "sha384", 00:16:44.252 "dhgroup": "ffdhe4096" 00:16:44.252 } 00:16:44.252 } 00:16:44.252 ]' 00:16:44.252 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.252 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.252 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.252 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:44.252 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.252 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.252 
09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.252 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.818 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:16:44.818 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.752 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.318 00:16:46.318 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.318 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.318 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.577 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.577 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.577 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.577 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.577 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.577 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.577 { 00:16:46.577 "cntlid": 75, 00:16:46.577 "qid": 0, 00:16:46.577 "state": "enabled", 00:16:46.577 "thread": "nvmf_tgt_poll_group_000", 00:16:46.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:46.577 "listen_address": { 00:16:46.577 "trtype": "TCP", 00:16:46.577 "adrfam": "IPv4", 00:16:46.577 "traddr": "10.0.0.2", 00:16:46.577 "trsvcid": "4420" 00:16:46.577 }, 00:16:46.577 "peer_address": { 00:16:46.577 "trtype": "TCP", 00:16:46.577 "adrfam": "IPv4", 00:16:46.577 "traddr": "10.0.0.1", 00:16:46.577 "trsvcid": "45194" 00:16:46.577 }, 00:16:46.577 "auth": { 00:16:46.577 "state": "completed", 00:16:46.577 "digest": "sha384", 00:16:46.577 "dhgroup": "ffdhe4096" 00:16:46.577 } 00:16:46.577 } 00:16:46.577 ]' 00:16:46.577 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.577 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.577 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.577 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:16:46.577 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.577 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.577 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.577 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.835 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:16:46.835 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:16:47.769 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.769 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:47.769 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.769 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.769 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.769 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.769 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:47.769 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:48.027 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:48.027 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.027 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:48.027 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:48.027 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:48.027 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.027 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.027 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.027 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.027 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.027 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.027 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.027 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.594 00:16:48.594 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.594 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.594 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.852 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.852 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.852 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.852 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.852 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.852 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.852 { 00:16:48.852 "cntlid": 77, 00:16:48.852 "qid": 0, 00:16:48.852 "state": "enabled", 00:16:48.852 "thread": "nvmf_tgt_poll_group_000", 00:16:48.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:48.852 "listen_address": { 00:16:48.852 "trtype": "TCP", 00:16:48.852 "adrfam": "IPv4", 00:16:48.852 "traddr": "10.0.0.2", 00:16:48.852 "trsvcid": "4420" 00:16:48.852 }, 00:16:48.852 "peer_address": { 00:16:48.852 "trtype": "TCP", 00:16:48.852 "adrfam": "IPv4", 00:16:48.852 "traddr": "10.0.0.1", 00:16:48.852 "trsvcid": "45232" 00:16:48.852 }, 00:16:48.852 "auth": { 00:16:48.852 "state": "completed", 00:16:48.852 "digest": "sha384", 00:16:48.852 "dhgroup": "ffdhe4096" 00:16:48.852 } 00:16:48.852 } 00:16:48.852 ]' 00:16:48.852 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.853 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.853 09:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.853 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:48.853 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.853 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.853 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.853 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.111 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:16:49.111 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:16:50.044 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.044 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:50.044 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.044 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.044 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.044 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.044 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:50.044 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:50.302 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:50.302 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.302 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:50.302 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:50.302 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:50.302 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.302 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:50.302 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.302 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.302 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.302 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:50.302 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.302 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.868 00:16:50.868 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.868 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.868 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.127 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.127 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.127 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.127 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.127 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.127 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.127 { 00:16:51.127 "cntlid": 79, 00:16:51.127 "qid": 0, 00:16:51.127 "state": "enabled", 00:16:51.127 "thread": "nvmf_tgt_poll_group_000", 00:16:51.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:51.127 "listen_address": { 00:16:51.127 "trtype": "TCP", 00:16:51.127 "adrfam": "IPv4", 00:16:51.127 "traddr": "10.0.0.2", 00:16:51.127 "trsvcid": "4420" 00:16:51.127 }, 00:16:51.127 "peer_address": { 00:16:51.127 "trtype": "TCP", 00:16:51.127 "adrfam": "IPv4", 00:16:51.127 "traddr": "10.0.0.1", 00:16:51.127 "trsvcid": "38102" 00:16:51.127 }, 00:16:51.127 "auth": { 00:16:51.127 "state": "completed", 00:16:51.127 "digest": "sha384", 00:16:51.127 "dhgroup": "ffdhe4096" 00:16:51.127 } 00:16:51.127 } 00:16:51.127 ]' 00:16:51.127 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.127 09:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.127 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.127 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:51.127 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.384 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.384 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.384 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.643 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:16:51.643 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:16:52.574 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.574 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:52.574 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.574 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.574 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.574 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.574 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.574 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:52.574 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:52.832 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:52.832 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.832 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:52.832 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:52.832 09:50:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:52.832 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.832 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.832 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.832 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.832 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.832 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.832 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.832 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.399 00:16:53.399 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.399 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.399 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.657 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.657 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.657 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.657 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.657 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.657 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.657 { 00:16:53.657 "cntlid": 81, 00:16:53.657 "qid": 0, 00:16:53.657 "state": "enabled", 00:16:53.657 "thread": "nvmf_tgt_poll_group_000", 00:16:53.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:53.657 "listen_address": { 00:16:53.657 "trtype": "TCP", 00:16:53.657 "adrfam": "IPv4", 00:16:53.657 "traddr": "10.0.0.2", 00:16:53.657 "trsvcid": "4420" 00:16:53.657 }, 00:16:53.657 "peer_address": { 00:16:53.657 "trtype": "TCP", 00:16:53.657 "adrfam": "IPv4", 00:16:53.657 "traddr": "10.0.0.1", 00:16:53.657 "trsvcid": "38134" 00:16:53.657 }, 00:16:53.657 "auth": { 00:16:53.657 "state": "completed", 00:16:53.657 "digest": 
"sha384", 00:16:53.657 "dhgroup": "ffdhe6144" 00:16:53.657 } 00:16:53.657 } 00:16:53.657 ]' 00:16:53.657 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.657 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.657 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.915 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:53.915 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.915 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.915 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.916 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.173 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:16:54.173 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:16:55.107 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.107 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:55.107 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.107 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.107 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.107 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.107 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.107 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.365 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:55.365 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.365 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.365 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:55.365 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:55.365 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.365 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.365 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.365 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.365 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.365 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.365 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.366 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.984 00:16:55.984 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.984 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.984 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.268 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.268 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.268 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.268 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.268 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.268 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.268 { 00:16:56.268 "cntlid": 83, 00:16:56.268 "qid": 0, 00:16:56.268 "state": "enabled", 00:16:56.268 "thread": "nvmf_tgt_poll_group_000", 00:16:56.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:56.268 "listen_address": { 00:16:56.268 "trtype": "TCP", 00:16:56.268 "adrfam": "IPv4", 00:16:56.268 "traddr": "10.0.0.2", 00:16:56.268 
"trsvcid": "4420" 00:16:56.268 }, 00:16:56.268 "peer_address": { 00:16:56.268 "trtype": "TCP", 00:16:56.268 "adrfam": "IPv4", 00:16:56.268 "traddr": "10.0.0.1", 00:16:56.268 "trsvcid": "38154" 00:16:56.268 }, 00:16:56.268 "auth": { 00:16:56.268 "state": "completed", 00:16:56.268 "digest": "sha384", 00:16:56.268 "dhgroup": "ffdhe6144" 00:16:56.268 } 00:16:56.268 } 00:16:56.268 ]' 00:16:56.268 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.268 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.268 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.268 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:56.268 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.268 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.268 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.268 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.528 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:16:56.528 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:16:57.461 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.461 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:57.461 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.461 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.461 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.461 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.461 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:57.461 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:57.719 
09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:57.719 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.719 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:57.719 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:57.719 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:57.719 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.719 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.719 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.719 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.719 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.719 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.719 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.719 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.286 00:16:58.286 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.286 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.286 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.545 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.545 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.545 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.545 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.545 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.545 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.545 { 00:16:58.545 "cntlid": 85, 00:16:58.545 "qid": 0, 00:16:58.545 "state": "enabled", 00:16:58.545 "thread": "nvmf_tgt_poll_group_000", 00:16:58.545 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:58.545 "listen_address": { 00:16:58.545 "trtype": "TCP", 00:16:58.545 "adrfam": "IPv4", 00:16:58.545 "traddr": "10.0.0.2", 00:16:58.545 "trsvcid": "4420" 00:16:58.545 }, 00:16:58.545 "peer_address": { 00:16:58.545 "trtype": "TCP", 00:16:58.545 "adrfam": "IPv4", 00:16:58.545 "traddr": "10.0.0.1", 00:16:58.545 "trsvcid": "38190" 00:16:58.545 }, 00:16:58.545 "auth": { 00:16:58.545 "state": "completed", 00:16:58.545 "digest": "sha384", 00:16:58.545 "dhgroup": "ffdhe6144" 00:16:58.545 } 00:16:58.545 } 00:16:58.545 ]' 00:16:58.545 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.545 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.545 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.545 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:58.545 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.803 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.803 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.803 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.061 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:16:59.061 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:16:59.995 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.995 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:59.995 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.995 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.995 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.995 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.995 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:59.995 09:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:00.253 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:00.253 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.253 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.253 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:00.253 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:00.253 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.253 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:00.253 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.253 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.253 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.253 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.253 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.253 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.818 00:17:00.818 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.818 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.818 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.076 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.076 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.076 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.076 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.076 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.076 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.076 { 00:17:01.076 "cntlid": 87, 
00:17:01.076 "qid": 0, 00:17:01.076 "state": "enabled", 00:17:01.076 "thread": "nvmf_tgt_poll_group_000", 00:17:01.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:01.076 "listen_address": { 00:17:01.076 "trtype": "TCP", 00:17:01.076 "adrfam": "IPv4", 00:17:01.076 "traddr": "10.0.0.2", 00:17:01.076 "trsvcid": "4420" 00:17:01.076 }, 00:17:01.076 "peer_address": { 00:17:01.076 "trtype": "TCP", 00:17:01.076 "adrfam": "IPv4", 00:17:01.076 "traddr": "10.0.0.1", 00:17:01.076 "trsvcid": "47942" 00:17:01.076 }, 00:17:01.076 "auth": { 00:17:01.076 "state": "completed", 00:17:01.076 "digest": "sha384", 00:17:01.076 "dhgroup": "ffdhe6144" 00:17:01.076 } 00:17:01.076 } 00:17:01.076 ]' 00:17:01.076 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.076 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.076 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.076 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:01.076 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.076 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.076 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.076 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.333 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:17:01.333 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:17:02.267 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.267 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:02.267 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.267 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.267 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.267 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.267 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.267 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:02.267 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:02.524 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:02.524 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.524 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.524 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:02.524 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:02.524 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.524 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.524 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.524 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.524 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.524 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.524 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.525 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.457 00:17:03.457 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.457 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.457 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.715 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.715 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.715 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.715 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.715 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.715 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.715 { 00:17:03.715 "cntlid": 89, 00:17:03.715 "qid": 0, 00:17:03.715 "state": "enabled", 00:17:03.715 "thread": "nvmf_tgt_poll_group_000", 00:17:03.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:03.715 "listen_address": { 00:17:03.715 "trtype": "TCP", 00:17:03.715 "adrfam": "IPv4", 00:17:03.715 "traddr": "10.0.0.2", 00:17:03.715 "trsvcid": "4420" 00:17:03.715 }, 00:17:03.715 "peer_address": { 00:17:03.715 "trtype": "TCP", 00:17:03.715 "adrfam": "IPv4", 00:17:03.715 "traddr": "10.0.0.1", 00:17:03.715 "trsvcid": "47970" 00:17:03.715 }, 00:17:03.715 "auth": { 00:17:03.715 "state": "completed", 00:17:03.715 "digest": "sha384", 00:17:03.715 "dhgroup": "ffdhe8192" 00:17:03.715 } 00:17:03.715 } 00:17:03.715 ]' 00:17:03.715 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.715 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.715 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.715 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:03.715 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.716 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.716 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.716 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.282 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:17:04.282 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:17:04.848 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.848 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:04.848 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.848 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.848 09:50:41 
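Besides the SPDK-internal attach, every iteration also reconnects with the kernel initiator: nvme connect is handed the same secrets in the DHHC-1:<t>:<base64>: interchange encoding, as seen in the nvme_connect line above. A sketch of that step with obviously fake placeholder secrets, since the real values are generated earlier in target/auth.sh and are not reproduced here:

    # Placeholder secrets for illustration only; the trace uses values
    # generated earlier in the script, in the DHHC-1 interchange format.
    HOSTKEY='DHHC-1:00:AAAAexamplehostsecretAAAA:'
    CTRLKEY='DHHC-1:03:AAAAexamplectrlsecretAAAA:'

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 \
        --dhchap-secret "$HOSTKEY" --dhchap-ctrl-secret "$CTRLKEY"

    # Tear the kernel controller back down once the handshake has been proven.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0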
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.848 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.848 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:04.848 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:05.416 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:05.416 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.416 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.416 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:05.416 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:05.416 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.416 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.416 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.416 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.416 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.416 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.416 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.416 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.982 00:17:05.982 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.982 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.982 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.240 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.240 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:06.240 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.240 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.497 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.497 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.497 { 00:17:06.497 "cntlid": 91, 00:17:06.497 "qid": 0, 00:17:06.497 "state": "enabled", 00:17:06.497 "thread": "nvmf_tgt_poll_group_000", 00:17:06.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:06.497 "listen_address": { 00:17:06.497 "trtype": "TCP", 00:17:06.497 "adrfam": "IPv4", 00:17:06.497 "traddr": "10.0.0.2", 00:17:06.497 "trsvcid": "4420" 00:17:06.497 }, 00:17:06.497 "peer_address": { 00:17:06.497 "trtype": "TCP", 00:17:06.497 "adrfam": "IPv4", 00:17:06.497 "traddr": "10.0.0.1", 00:17:06.497 "trsvcid": "47998" 00:17:06.497 }, 00:17:06.497 "auth": { 00:17:06.497 "state": "completed", 00:17:06.497 "digest": "sha384", 00:17:06.497 "dhgroup": "ffdhe8192" 00:17:06.497 } 00:17:06.497 } 00:17:06.497 ]' 00:17:06.497 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.497 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.497 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.497 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:06.497 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.497 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.497 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.497 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.756 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:17:06.756 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:17:07.689 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.689 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:07.689 09:50:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.689 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.689 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.689 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.689 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:07.689 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:07.946 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:07.946 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.947 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.947 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:07.947 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:07.947 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.947 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.947 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.947 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.947 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.947 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.947 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.947 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.883 00:17:08.883 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.883 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.883 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.141 09:50:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.141 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.141 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.141 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.141 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.141 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.141 { 00:17:09.141 "cntlid": 93, 00:17:09.141 "qid": 0, 00:17:09.141 "state": "enabled", 00:17:09.141 "thread": "nvmf_tgt_poll_group_000", 00:17:09.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:09.141 "listen_address": { 00:17:09.141 "trtype": "TCP", 00:17:09.141 "adrfam": "IPv4", 00:17:09.141 "traddr": "10.0.0.2", 00:17:09.141 "trsvcid": "4420" 00:17:09.141 }, 00:17:09.141 "peer_address": { 00:17:09.141 "trtype": "TCP", 00:17:09.141 "adrfam": "IPv4", 00:17:09.141 "traddr": "10.0.0.1", 00:17:09.141 "trsvcid": "48022" 00:17:09.141 }, 00:17:09.141 "auth": { 00:17:09.141 "state": "completed", 00:17:09.141 "digest": "sha384", 00:17:09.141 "dhgroup": "ffdhe8192" 00:17:09.141 } 00:17:09.141 } 00:17:09.141 ]' 00:17:09.141 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.141 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.141 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.141 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:09.141 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.141 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.141 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.141 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.399 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:17:09.399 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:17:10.333 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.333 09:50:47 
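Before inspecting the qpair, each iteration first confirms that the host-side attach actually produced a controller named nvme0; had authentication failed, bdev_nvme_attach_controller would not have registered one. A minimal sketch of that guard (target/auth.sh@73), assuming the same hostrpc wrapper sketched earlier:

    # List controllers on the host-side SPDK app and pull out their names.
    name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')

    # The attach in this iteration used "-b nvme0", so exactly that name
    # must come back; anything else fails the test.
    [[ $name == nvme0 ]]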
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:10.333 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.333 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.333 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.333 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.333 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:10.333 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:10.591 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:10.591 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.591 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.591 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:10.591 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:10.591 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.591 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:10.591 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.591 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.591 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.591 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:10.591 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.591 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.525 00:17:11.525 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.525 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.525 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
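Note that the key3 iterations (such as the one just above) pass only --dhchap-key key3 with no controller key, whereas key0..2 also pass a ckey. That comes from the parameter expansion at target/auth.sh@68, which emits the --dhchap-ctrlr-key argument only when a controller key exists for that index. A small sketch of how that expansion behaves, with an assumed ckeys layout in which slot 3 is intentionally empty (only emptiness matters here, not the actual key material):

    # Assumed layout mirroring what the trace implies: ckeys[0..2] are set,
    # ckeys[3] is empty, so key3 gets unidirectional auth only.
    ckeys=([0]=x [1]=x [2]=x [3]=)

    keyid=0
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid: ${#ckey[@]} extra arg(s)"   # 2: --dhchap-ctrlr-key ckey0

    keyid=3
    # ${var:+word} expands to nothing when the value is empty or unset, so
    # the ckey array stays empty and no --dhchap-ctrlr-key is passed.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid: ${#ckey[@]} extra arg(s)"   # 0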
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.783 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.783 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.783 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.783 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.783 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.783 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.783 { 00:17:11.783 "cntlid": 95, 00:17:11.783 "qid": 0, 00:17:11.783 "state": "enabled", 00:17:11.783 "thread": "nvmf_tgt_poll_group_000", 00:17:11.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:11.783 "listen_address": { 00:17:11.783 "trtype": "TCP", 00:17:11.783 "adrfam": "IPv4", 00:17:11.783 "traddr": "10.0.0.2", 00:17:11.783 "trsvcid": "4420" 00:17:11.783 }, 00:17:11.783 "peer_address": { 00:17:11.783 "trtype": "TCP", 00:17:11.783 "adrfam": "IPv4", 00:17:11.783 "traddr": "10.0.0.1", 00:17:11.783 "trsvcid": "49186" 00:17:11.783 }, 00:17:11.783 "auth": { 00:17:11.783 "state": "completed", 00:17:11.783 "digest": "sha384", 00:17:11.783 "dhgroup": "ffdhe8192" 00:17:11.783 } 00:17:11.783 } 00:17:11.783 ]' 00:17:11.783 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.783 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.783 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.041 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:12.041 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.041 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.041 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.041 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.299 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:17:12.299 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:17:13.232 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.232 09:50:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:13.232 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.232 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.232 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.232 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:13.232 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.232 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.232 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:13.233 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:13.490 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:13.490 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.490 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.490 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:13.490 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:13.490 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.490 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.490 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.490 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.490 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.490 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.490 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.491 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.748 00:17:14.006 
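The for digest / for dhgroup / for keyid entries at target/auth.sh@118-120 show the driver behind this whole section: at this point it has moved from sha384 to sha512 and from the ffdhe groups to the null group (DH-HMAC-CHAP with no FF-DHE exchange). A rough reconstruction of that driver loop, with the array contents inferred only from what is visible in this excerpt; the real arrays in target/auth.sh may well contain more digests and groups than appear here:

    # Inferred from this excerpt; connect_authenticate and hostrpc are the
    # script's own helpers traced above.
    digests=(sha384 sha512)
    dhgroups=(null ffdhe4096 ffdhe6144 ffdhe8192)

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in 0 1 2 3; do
                # Restrict the host to this one combination, then run the
                # full add_host/attach/verify/teardown cycle traced above.
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                    --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done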
09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.006 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.006 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.264 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.264 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.264 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.264 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.264 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.264 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.264 { 00:17:14.264 "cntlid": 97, 00:17:14.264 "qid": 0, 00:17:14.264 "state": "enabled", 00:17:14.264 "thread": "nvmf_tgt_poll_group_000", 00:17:14.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:14.264 "listen_address": { 00:17:14.264 "trtype": "TCP", 00:17:14.264 "adrfam": "IPv4", 00:17:14.264 "traddr": "10.0.0.2", 00:17:14.264 "trsvcid": "4420" 00:17:14.264 }, 00:17:14.264 "peer_address": { 00:17:14.264 "trtype": "TCP", 00:17:14.264 "adrfam": "IPv4", 00:17:14.264 "traddr": "10.0.0.1", 00:17:14.264 "trsvcid": "49214" 00:17:14.264 }, 00:17:14.264 "auth": { 00:17:14.264 "state": "completed", 00:17:14.264 "digest": "sha512", 00:17:14.264 "dhgroup": "null" 00:17:14.264 } 00:17:14.264 } 00:17:14.264 ]' 00:17:14.264 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.264 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.264 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.264 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:14.264 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.264 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.264 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.264 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.522 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:17:14.522 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:17:15.455 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.455 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:15.455 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.455 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.455 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.455 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.455 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:15.455 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:15.713 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:15.713 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.713 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:15.713 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:15.713 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:15.713 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.713 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.713 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.713 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.713 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.713 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.713 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.713 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.971 00:17:15.971 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.971 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.971 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.536 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.536 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.536 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.536 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.536 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.536 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.536 { 00:17:16.536 "cntlid": 99, 00:17:16.536 "qid": 0, 00:17:16.536 "state": "enabled", 00:17:16.536 "thread": "nvmf_tgt_poll_group_000", 00:17:16.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:16.536 "listen_address": { 00:17:16.536 "trtype": "TCP", 00:17:16.536 "adrfam": "IPv4", 00:17:16.536 "traddr": "10.0.0.2", 00:17:16.536 "trsvcid": "4420" 00:17:16.536 }, 00:17:16.536 "peer_address": { 00:17:16.536 "trtype": "TCP", 00:17:16.536 "adrfam": "IPv4", 00:17:16.536 "traddr": "10.0.0.1", 00:17:16.536 "trsvcid": "49240" 00:17:16.536 }, 00:17:16.536 "auth": { 00:17:16.536 "state": "completed", 00:17:16.536 "digest": "sha512", 00:17:16.536 "dhgroup": "null" 00:17:16.536 } 00:17:16.536 } 00:17:16.536 ]' 00:17:16.536 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.536 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.536 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.536 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:16.536 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.536 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.536 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.536 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.794 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:17:16.794 09:50:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:17:17.728 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.728 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:17.728 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.728 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.728 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.728 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.728 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:17.728 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:17.986 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:17.986 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.986 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:17.986 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:17.986 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:17.986 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.986 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.986 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.986 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.986 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.986 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.986 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
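For readers following the trace, the per-iteration sequence that connect_authenticate drives (target/auth.sh@65-71 above) can be summarized as a standalone sketch. This is a reconstruction from the trace, not the script itself; the rpc.py path, socket, NQNs and host UUID are the ones used in this run, and key2/ckey2 stand in for whichever keyring entries the loop selects.

```bash
#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration as seen in the trace above
# (digest=sha512, dhgroup=null, keyid=2). Paths/NQNs are copied from this run;
# the DH-HMAC-CHAP keys are assumed to be registered in the keyring already.
set -e

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

# 1. Restrict the host-side bdev_nvme module to the digest/dhgroup under test.
$rpc -s $host_sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups null

# 2. Allow the host on the subsystem with the chosen key pair
#    (target side, default RPC socket).
$rpc nvmf_subsystem_add_host $subnqn $hostnqn \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Attach a controller from the host application with the same keys;
#    this is the step that exercises DH-HMAC-CHAP over TCP.
$rpc -s $host_sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
```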
00:17:17.986 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.244 00:17:18.244 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.244 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.244 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.503 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.503 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.503 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.503 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.503 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.503 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.503 { 00:17:18.503 "cntlid": 101, 00:17:18.503 "qid": 0, 00:17:18.503 "state": "enabled", 00:17:18.503 "thread": "nvmf_tgt_poll_group_000", 00:17:18.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:18.503 "listen_address": { 00:17:18.503 "trtype": "TCP", 00:17:18.503 "adrfam": "IPv4", 00:17:18.503 "traddr": "10.0.0.2", 00:17:18.503 "trsvcid": "4420" 00:17:18.503 }, 00:17:18.503 "peer_address": { 00:17:18.503 "trtype": "TCP", 00:17:18.503 "adrfam": "IPv4", 00:17:18.503 "traddr": "10.0.0.1", 00:17:18.503 "trsvcid": "49268" 00:17:18.503 }, 00:17:18.503 "auth": { 00:17:18.503 "state": "completed", 00:17:18.503 "digest": "sha512", 00:17:18.503 "dhgroup": "null" 00:17:18.503 } 00:17:18.503 } 00:17:18.503 ]' 00:17:18.503 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.759 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.759 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.759 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:18.759 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.759 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.759 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.760 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.018 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:17:19.018 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:17:19.951 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.951 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:19.951 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.951 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.951 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.951 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.951 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:19.951 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:20.210 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:20.210 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.210 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:20.210 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:20.210 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:20.210 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.210 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:20.210 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.210 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.210 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.210 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.210 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.210 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.468 00:17:20.468 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.468 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.468 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.726 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.726 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.726 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.726 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.726 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.726 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.726 { 00:17:20.726 "cntlid": 103, 00:17:20.726 "qid": 0, 00:17:20.726 "state": "enabled", 00:17:20.726 "thread": "nvmf_tgt_poll_group_000", 00:17:20.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:20.726 "listen_address": { 00:17:20.726 "trtype": "TCP", 00:17:20.726 "adrfam": "IPv4", 00:17:20.726 "traddr": "10.0.0.2", 00:17:20.726 "trsvcid": "4420" 00:17:20.726 }, 00:17:20.726 "peer_address": { 00:17:20.726 "trtype": "TCP", 00:17:20.726 "adrfam": "IPv4", 00:17:20.726 "traddr": "10.0.0.1", 00:17:20.726 "trsvcid": "53586" 00:17:20.726 }, 00:17:20.726 "auth": { 00:17:20.726 "state": "completed", 00:17:20.726 "digest": "sha512", 00:17:20.726 "dhgroup": "null" 00:17:20.726 } 00:17:20.726 } 00:17:20.726 ]' 00:17:20.726 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.726 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.726 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.726 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:20.726 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.984 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.984 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.984 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.242 09:50:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:17:21.242 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:17:22.176 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.176 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:22.176 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.176 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.176 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.176 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.176 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.176 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:22.176 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:22.434 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:22.434 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.434 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.434 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:22.434 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:22.434 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.434 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.434 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.434 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.434 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.434 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
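The verification half of each iteration (target/auth.sh@73-77 in the trace) only inspects state: it confirms the attached controller is visible on the host and that the target reports the qpair's authentication as completed with the expected digest and dhgroup. A minimal sketch of those checks, using the same jq filters as the trace; the expected values (sha512/ffdhe2048) are the ones this pass of the loop exercises.

```bash
#!/usr/bin/env bash
# Sketch of the post-connect checks seen in the trace (target/auth.sh@73-77).
# Assumes the controller was attached as "nvme0" and the target answers on
# its default RPC socket.
set -e

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: the attached controller must be visible by name.
name=$($rpc -s $host_sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# Target side: the qpair must report a completed DH-HMAC-CHAP negotiation
# with the digest/dhgroup configured for this iteration.
qpairs=$($rpc nvmf_subsystem_get_qpairs $subnqn)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Tear down before the next digest/dhgroup/key combination.
$rpc -s $host_sock bdev_nvme_detach_controller nvme0
```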
00:17:22.434 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.434 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.692 00:17:22.692 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.692 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.692 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.258 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.258 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.258 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.258 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.258 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.258 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.258 { 00:17:23.258 "cntlid": 105, 00:17:23.258 "qid": 0, 00:17:23.258 "state": "enabled", 00:17:23.258 "thread": "nvmf_tgt_poll_group_000", 00:17:23.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:23.258 "listen_address": { 00:17:23.258 "trtype": "TCP", 00:17:23.258 "adrfam": "IPv4", 00:17:23.258 "traddr": "10.0.0.2", 00:17:23.258 "trsvcid": "4420" 00:17:23.258 }, 00:17:23.258 "peer_address": { 00:17:23.258 "trtype": "TCP", 00:17:23.258 "adrfam": "IPv4", 00:17:23.258 "traddr": "10.0.0.1", 00:17:23.258 "trsvcid": "53618" 00:17:23.258 }, 00:17:23.258 "auth": { 00:17:23.258 "state": "completed", 00:17:23.258 "digest": "sha512", 00:17:23.258 "dhgroup": "ffdhe2048" 00:17:23.258 } 00:17:23.258 } 00:17:23.258 ]' 00:17:23.258 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.258 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.258 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.258 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:23.258 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.258 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.258 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.258 09:51:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.516 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:17:23.516 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:17:24.450 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.450 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:24.450 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.450 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.450 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.450 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.450 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:24.450 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:24.709 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:24.709 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.709 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:24.709 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:24.709 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:24.709 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.709 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.709 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.709 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:24.709 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.709 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.709 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.709 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.966 00:17:24.966 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.966 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.966 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.225 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.225 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.225 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.225 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.225 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.225 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.225 { 00:17:25.225 "cntlid": 107, 00:17:25.225 "qid": 0, 00:17:25.225 "state": "enabled", 00:17:25.225 "thread": "nvmf_tgt_poll_group_000", 00:17:25.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:25.225 "listen_address": { 00:17:25.225 "trtype": "TCP", 00:17:25.225 "adrfam": "IPv4", 00:17:25.225 "traddr": "10.0.0.2", 00:17:25.225 "trsvcid": "4420" 00:17:25.225 }, 00:17:25.225 "peer_address": { 00:17:25.225 "trtype": "TCP", 00:17:25.225 "adrfam": "IPv4", 00:17:25.225 "traddr": "10.0.0.1", 00:17:25.225 "trsvcid": "53654" 00:17:25.225 }, 00:17:25.225 "auth": { 00:17:25.225 "state": "completed", 00:17:25.225 "digest": "sha512", 00:17:25.225 "dhgroup": "ffdhe2048" 00:17:25.225 } 00:17:25.225 } 00:17:25.225 ]' 00:17:25.225 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.483 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.483 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.483 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:25.483 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:25.483 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.483 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.483 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.746 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:17:25.746 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:17:26.746 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.746 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:26.746 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.746 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.746 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.746 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.746 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:26.746 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:27.004 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:27.004 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.004 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:27.004 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:27.004 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:27.004 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.004 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
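Each key is also exercised through the kernel initiator (target/auth.sh@36 and @82 in the trace): nvme-cli connects with the same DH-HMAC-CHAP secrets and then disconnects before the host NQN is removed from the subsystem. A sketch of that leg follows; the secrets are placeholders for the DHHC-1 strings generated for the iteration, and every other option is copied from this run.

```bash
#!/usr/bin/env bash
# Sketch of the nvme-cli leg seen in the trace (target/auth.sh@36 and @82).
# HOST_KEY/CTRL_KEY are placeholders for the per-iteration DHHC-1 secrets.
set -e

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
hostid=29f67375-a902-e411-ace9-001e67bc3c9a
HOST_KEY='DHHC-1:...'   # placeholder: host secret for this key index
CTRL_KEY='DHHC-1:...'   # placeholder: controller secret (omitted for key3)

# Connect with bidirectional DH-HMAC-CHAP: --dhchap-secret authenticates the
# host, --dhchap-ctrl-secret makes the host verify the controller as well.
nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 \
    -q $hostnqn --hostid $hostid -l 0 \
    --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"

# Tear down so the next digest/dhgroup/key combination starts clean.
nvme disconnect -n $subnqn
```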
00:17:27.004 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.004 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.004 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.004 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.004 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.004 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.260 00:17:27.260 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.260 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.260 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.518 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.518 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.518 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.518 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.518 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.518 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.518 { 00:17:27.518 "cntlid": 109, 00:17:27.518 "qid": 0, 00:17:27.518 "state": "enabled", 00:17:27.518 "thread": "nvmf_tgt_poll_group_000", 00:17:27.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:27.518 "listen_address": { 00:17:27.518 "trtype": "TCP", 00:17:27.518 "adrfam": "IPv4", 00:17:27.518 "traddr": "10.0.0.2", 00:17:27.518 "trsvcid": "4420" 00:17:27.518 }, 00:17:27.518 "peer_address": { 00:17:27.518 "trtype": "TCP", 00:17:27.518 "adrfam": "IPv4", 00:17:27.518 "traddr": "10.0.0.1", 00:17:27.518 "trsvcid": "53680" 00:17:27.518 }, 00:17:27.518 "auth": { 00:17:27.518 "state": "completed", 00:17:27.518 "digest": "sha512", 00:17:27.518 "dhgroup": "ffdhe2048" 00:17:27.518 } 00:17:27.518 } 00:17:27.518 ]' 00:17:27.518 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.518 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.518 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.776 09:51:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:27.776 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.776 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.776 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.776 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.033 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:17:28.033 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:17:28.967 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.967 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:28.967 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.967 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.967 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.967 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.967 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:28.967 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:29.225 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:29.225 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.225 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:29.225 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:29.225 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:29.225 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.225 09:51:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:29.225 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.225 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.225 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.225 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:29.226 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.226 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.484 00:17:29.484 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.484 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.484 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.743 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.743 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.743 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.743 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.743 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.743 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.743 { 00:17:29.743 "cntlid": 111, 00:17:29.743 "qid": 0, 00:17:29.743 "state": "enabled", 00:17:29.743 "thread": "nvmf_tgt_poll_group_000", 00:17:29.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:29.743 "listen_address": { 00:17:29.743 "trtype": "TCP", 00:17:29.743 "adrfam": "IPv4", 00:17:29.743 "traddr": "10.0.0.2", 00:17:29.743 "trsvcid": "4420" 00:17:29.743 }, 00:17:29.743 "peer_address": { 00:17:29.743 "trtype": "TCP", 00:17:29.743 "adrfam": "IPv4", 00:17:29.743 "traddr": "10.0.0.1", 00:17:29.743 "trsvcid": "37898" 00:17:29.743 }, 00:17:29.743 "auth": { 00:17:29.743 "state": "completed", 00:17:29.743 "digest": "sha512", 00:17:29.743 "dhgroup": "ffdhe2048" 00:17:29.743 } 00:17:29.743 } 00:17:29.743 ]' 00:17:29.743 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.743 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.743 
09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.000 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:30.000 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.000 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.000 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.000 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.258 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:17:30.258 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:17:31.190 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.190 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:31.190 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.190 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.190 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.190 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.190 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.190 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:31.190 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:31.448 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:31.448 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.448 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:31.448 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:31.448 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:31.448 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.448 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.448 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.448 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.448 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.448 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.448 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.448 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.705 00:17:31.705 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.705 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.705 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.963 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.963 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.963 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.963 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.963 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.963 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.963 { 00:17:31.963 "cntlid": 113, 00:17:31.963 "qid": 0, 00:17:31.963 "state": "enabled", 00:17:31.963 "thread": "nvmf_tgt_poll_group_000", 00:17:31.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:31.963 "listen_address": { 00:17:31.963 "trtype": "TCP", 00:17:31.963 "adrfam": "IPv4", 00:17:31.963 "traddr": "10.0.0.2", 00:17:31.963 "trsvcid": "4420" 00:17:31.963 }, 00:17:31.963 "peer_address": { 00:17:31.963 "trtype": "TCP", 00:17:31.963 "adrfam": "IPv4", 00:17:31.963 "traddr": "10.0.0.1", 00:17:31.963 "trsvcid": "37920" 00:17:31.963 }, 00:17:31.963 "auth": { 00:17:31.963 "state": "completed", 00:17:31.963 "digest": "sha512", 00:17:31.963 "dhgroup": "ffdhe3072" 00:17:31.963 } 00:17:31.963 } 00:17:31.963 ]' 00:17:31.963 09:51:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.220 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.220 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.220 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:32.220 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.220 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.220 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.220 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.477 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:17:32.477 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:17:33.411 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.412 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:33.412 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.412 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.412 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.412 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.412 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:33.412 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:33.669 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:33.669 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.670 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:33.670 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:33.670 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:33.670 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.670 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.670 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.670 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.670 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.670 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.670 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.670 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.927 00:17:33.927 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.927 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.927 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.185 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.185 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.185 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.185 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.443 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.443 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.443 { 00:17:34.443 "cntlid": 115, 00:17:34.443 "qid": 0, 00:17:34.443 "state": "enabled", 00:17:34.443 "thread": "nvmf_tgt_poll_group_000", 00:17:34.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:34.443 "listen_address": { 00:17:34.443 "trtype": "TCP", 00:17:34.443 "adrfam": "IPv4", 00:17:34.443 "traddr": "10.0.0.2", 00:17:34.443 "trsvcid": "4420" 00:17:34.443 }, 00:17:34.443 "peer_address": { 00:17:34.443 "trtype": "TCP", 00:17:34.443 "adrfam": "IPv4", 
00:17:34.443 "traddr": "10.0.0.1", 00:17:34.443 "trsvcid": "37940" 00:17:34.443 }, 00:17:34.443 "auth": { 00:17:34.443 "state": "completed", 00:17:34.443 "digest": "sha512", 00:17:34.443 "dhgroup": "ffdhe3072" 00:17:34.443 } 00:17:34.443 } 00:17:34.443 ]' 00:17:34.443 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.443 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.443 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.443 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:34.443 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.443 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.443 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.443 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.701 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:17:34.701 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:17:35.635 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.635 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:35.635 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.635 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.635 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.635 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.635 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:35.635 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:35.892 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:35.892 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.892 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:35.892 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:35.893 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:35.893 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.893 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.893 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.893 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.893 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.893 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.893 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.893 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.150 00:17:36.150 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.150 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.150 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.716 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.716 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.716 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.716 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.716 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.716 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.716 { 00:17:36.716 "cntlid": 117, 00:17:36.716 "qid": 0, 00:17:36.716 "state": "enabled", 00:17:36.716 "thread": "nvmf_tgt_poll_group_000", 00:17:36.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:36.716 "listen_address": { 00:17:36.716 "trtype": "TCP", 
00:17:36.716 "adrfam": "IPv4", 00:17:36.716 "traddr": "10.0.0.2", 00:17:36.716 "trsvcid": "4420" 00:17:36.716 }, 00:17:36.716 "peer_address": { 00:17:36.716 "trtype": "TCP", 00:17:36.716 "adrfam": "IPv4", 00:17:36.716 "traddr": "10.0.0.1", 00:17:36.716 "trsvcid": "37978" 00:17:36.716 }, 00:17:36.716 "auth": { 00:17:36.716 "state": "completed", 00:17:36.716 "digest": "sha512", 00:17:36.716 "dhgroup": "ffdhe3072" 00:17:36.716 } 00:17:36.716 } 00:17:36.716 ]' 00:17:36.716 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.716 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.716 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.716 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:36.716 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.717 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.717 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.717 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.974 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:17:36.974 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:17:37.908 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.908 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:37.908 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.908 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.908 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.908 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.908 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:37.908 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:38.166 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:38.166 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.166 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:38.166 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:38.166 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:38.166 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.166 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:38.166 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.166 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.166 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.166 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:38.166 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:38.166 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:38.424 00:17:38.424 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.424 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.424 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.681 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.681 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.681 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.681 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.681 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.681 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.681 { 00:17:38.681 "cntlid": 119, 00:17:38.681 "qid": 0, 00:17:38.681 "state": "enabled", 00:17:38.681 "thread": "nvmf_tgt_poll_group_000", 00:17:38.681 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:38.681 "listen_address": { 00:17:38.681 "trtype": "TCP", 00:17:38.681 "adrfam": "IPv4", 00:17:38.681 "traddr": "10.0.0.2", 00:17:38.681 "trsvcid": "4420" 00:17:38.681 }, 00:17:38.681 "peer_address": { 00:17:38.681 "trtype": "TCP", 00:17:38.681 "adrfam": "IPv4", 00:17:38.681 "traddr": "10.0.0.1", 00:17:38.681 "trsvcid": "38006" 00:17:38.681 }, 00:17:38.681 "auth": { 00:17:38.681 "state": "completed", 00:17:38.681 "digest": "sha512", 00:17:38.681 "dhgroup": "ffdhe3072" 00:17:38.681 } 00:17:38.681 } 00:17:38.681 ]' 00:17:38.682 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.682 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.682 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.940 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:38.940 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.940 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.940 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.940 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.198 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:17:39.198 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:17:40.131 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.131 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:40.131 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.131 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.131 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.131 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.131 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.131 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:40.131 09:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:40.390 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:40.390 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.390 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.390 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:40.390 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:40.390 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.390 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.390 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.390 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.390 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.390 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.390 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.390 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.955 00:17:40.955 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.955 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.955 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.212 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.212 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.212 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.212 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.212 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.212 09:51:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.212 { 00:17:41.212 "cntlid": 121, 00:17:41.212 "qid": 0, 00:17:41.212 "state": "enabled", 00:17:41.212 "thread": "nvmf_tgt_poll_group_000", 00:17:41.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:41.212 "listen_address": { 00:17:41.212 "trtype": "TCP", 00:17:41.212 "adrfam": "IPv4", 00:17:41.212 "traddr": "10.0.0.2", 00:17:41.212 "trsvcid": "4420" 00:17:41.212 }, 00:17:41.212 "peer_address": { 00:17:41.212 "trtype": "TCP", 00:17:41.212 "adrfam": "IPv4", 00:17:41.212 "traddr": "10.0.0.1", 00:17:41.212 "trsvcid": "38416" 00:17:41.212 }, 00:17:41.212 "auth": { 00:17:41.212 "state": "completed", 00:17:41.212 "digest": "sha512", 00:17:41.212 "dhgroup": "ffdhe4096" 00:17:41.212 } 00:17:41.212 } 00:17:41.212 ]' 00:17:41.212 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.212 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.212 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.212 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:41.212 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.212 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.212 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.212 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.469 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:17:41.469 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:17:42.402 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.402 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:42.402 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.402 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.402 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
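[editorial note] Each qpair dump printed above is validated field by field with jq (auth.sh lines 75-77 in the trace) before the controller is detached. A minimal stand-alone reproduction against a saved dump follows; the file name qpairs.json is hypothetical, and ffdhe4096 matches the dhgroup in force at this point in the run.

# qpairs.json: output of `rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0`
[[ $(jq -r '.[0].auth.digest'  qpairs.json) == sha512    ]]   # negotiated hash function
[[ $(jq -r '.[0].auth.dhgroup' qpairs.json) == ffdhe4096 ]]   # dhgroup selected via bdev_nvme_set_options
[[ $(jq -r '.[0].auth.state'   qpairs.json) == completed ]]   # DH-HMAC-CHAP handshake finished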
00:17:42.402 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.402 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:42.402 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:42.660 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:42.660 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.660 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.660 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:42.660 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:42.660 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.660 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.660 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.660 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.660 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.660 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.660 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.660 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.921 00:17:43.178 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.178 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.178 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.436 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.436 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.436 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.436 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.436 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.436 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.436 { 00:17:43.436 "cntlid": 123, 00:17:43.436 "qid": 0, 00:17:43.436 "state": "enabled", 00:17:43.436 "thread": "nvmf_tgt_poll_group_000", 00:17:43.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:43.436 "listen_address": { 00:17:43.436 "trtype": "TCP", 00:17:43.436 "adrfam": "IPv4", 00:17:43.436 "traddr": "10.0.0.2", 00:17:43.436 "trsvcid": "4420" 00:17:43.436 }, 00:17:43.436 "peer_address": { 00:17:43.436 "trtype": "TCP", 00:17:43.436 "adrfam": "IPv4", 00:17:43.436 "traddr": "10.0.0.1", 00:17:43.436 "trsvcid": "38442" 00:17:43.436 }, 00:17:43.436 "auth": { 00:17:43.436 "state": "completed", 00:17:43.436 "digest": "sha512", 00:17:43.436 "dhgroup": "ffdhe4096" 00:17:43.436 } 00:17:43.436 } 00:17:43.436 ]' 00:17:43.436 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.436 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.436 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.436 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:43.436 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.436 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.436 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.436 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.694 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:17:43.694 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:17:44.626 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.626 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:44.626 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.626 09:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.626 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.626 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.626 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:44.626 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:44.884 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:44.884 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.884 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:44.884 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:44.884 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:44.884 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.884 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.884 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.884 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.884 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.884 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.884 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.884 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.450 00:17:45.451 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.451 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.451 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.709 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.709 09:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.709 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.709 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.709 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.709 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.709 { 00:17:45.709 "cntlid": 125, 00:17:45.709 "qid": 0, 00:17:45.710 "state": "enabled", 00:17:45.710 "thread": "nvmf_tgt_poll_group_000", 00:17:45.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:45.710 "listen_address": { 00:17:45.710 "trtype": "TCP", 00:17:45.710 "adrfam": "IPv4", 00:17:45.710 "traddr": "10.0.0.2", 00:17:45.710 "trsvcid": "4420" 00:17:45.710 }, 00:17:45.710 "peer_address": { 00:17:45.710 "trtype": "TCP", 00:17:45.710 "adrfam": "IPv4", 00:17:45.710 "traddr": "10.0.0.1", 00:17:45.710 "trsvcid": "38476" 00:17:45.710 }, 00:17:45.710 "auth": { 00:17:45.710 "state": "completed", 00:17:45.710 "digest": "sha512", 00:17:45.710 "dhgroup": "ffdhe4096" 00:17:45.710 } 00:17:45.710 } 00:17:45.710 ]' 00:17:45.710 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.710 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.710 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.710 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:45.710 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.710 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.710 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.710 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.968 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:17:45.968 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:17:46.904 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.904 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:46.904 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.904 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.904 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.904 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.904 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:46.904 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:47.163 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:47.163 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.163 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.163 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:47.163 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:47.163 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.163 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:47.163 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.163 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.163 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.163 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:47.163 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.163 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.729 00:17:47.729 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.729 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.729 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.729 09:51:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.730 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.730 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.730 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.988 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.988 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.988 { 00:17:47.988 "cntlid": 127, 00:17:47.988 "qid": 0, 00:17:47.988 "state": "enabled", 00:17:47.988 "thread": "nvmf_tgt_poll_group_000", 00:17:47.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:47.988 "listen_address": { 00:17:47.988 "trtype": "TCP", 00:17:47.988 "adrfam": "IPv4", 00:17:47.988 "traddr": "10.0.0.2", 00:17:47.988 "trsvcid": "4420" 00:17:47.988 }, 00:17:47.988 "peer_address": { 00:17:47.988 "trtype": "TCP", 00:17:47.988 "adrfam": "IPv4", 00:17:47.988 "traddr": "10.0.0.1", 00:17:47.988 "trsvcid": "38510" 00:17:47.988 }, 00:17:47.988 "auth": { 00:17:47.988 "state": "completed", 00:17:47.988 "digest": "sha512", 00:17:47.988 "dhgroup": "ffdhe4096" 00:17:47.988 } 00:17:47.988 } 00:17:47.988 ]' 00:17:47.988 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.988 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.988 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.988 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:47.988 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.988 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.988 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.988 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.245 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:17:48.245 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:17:49.179 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.179 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:49.179 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.179 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.179 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.179 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.179 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.179 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:49.179 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:49.437 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:49.437 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.437 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.437 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:49.437 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:49.437 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.437 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.437 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.437 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.437 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.437 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.437 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.437 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.003 00:17:50.003 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.003 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.003 
09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.261 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.261 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.261 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.261 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.261 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.261 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.261 { 00:17:50.261 "cntlid": 129, 00:17:50.261 "qid": 0, 00:17:50.261 "state": "enabled", 00:17:50.261 "thread": "nvmf_tgt_poll_group_000", 00:17:50.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:50.261 "listen_address": { 00:17:50.261 "trtype": "TCP", 00:17:50.261 "adrfam": "IPv4", 00:17:50.261 "traddr": "10.0.0.2", 00:17:50.261 "trsvcid": "4420" 00:17:50.261 }, 00:17:50.261 "peer_address": { 00:17:50.261 "trtype": "TCP", 00:17:50.261 "adrfam": "IPv4", 00:17:50.261 "traddr": "10.0.0.1", 00:17:50.261 "trsvcid": "39528" 00:17:50.261 }, 00:17:50.261 "auth": { 00:17:50.261 "state": "completed", 00:17:50.261 "digest": "sha512", 00:17:50.261 "dhgroup": "ffdhe6144" 00:17:50.261 } 00:17:50.261 } 00:17:50.261 ]' 00:17:50.261 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.261 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.261 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.261 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:50.261 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.261 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.261 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.261 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.519 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:17:50.519 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret 
DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:17:51.453 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.453 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:51.453 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.453 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.453 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.453 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.453 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:51.453 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:51.711 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:51.711 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.711 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:51.711 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:51.711 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:51.711 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.711 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.711 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.712 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.712 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.712 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.712 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.712 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.278 00:17:52.278 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.278 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.278 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.534 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.534 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.534 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.534 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.534 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.534 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.534 { 00:17:52.534 "cntlid": 131, 00:17:52.534 "qid": 0, 00:17:52.534 "state": "enabled", 00:17:52.534 "thread": "nvmf_tgt_poll_group_000", 00:17:52.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:52.534 "listen_address": { 00:17:52.534 "trtype": "TCP", 00:17:52.534 "adrfam": "IPv4", 00:17:52.534 "traddr": "10.0.0.2", 00:17:52.534 "trsvcid": "4420" 00:17:52.534 }, 00:17:52.534 "peer_address": { 00:17:52.534 "trtype": "TCP", 00:17:52.534 "adrfam": "IPv4", 00:17:52.534 "traddr": "10.0.0.1", 00:17:52.534 "trsvcid": "39568" 00:17:52.534 }, 00:17:52.534 "auth": { 00:17:52.534 "state": "completed", 00:17:52.535 "digest": "sha512", 00:17:52.535 "dhgroup": "ffdhe6144" 00:17:52.535 } 00:17:52.535 } 00:17:52.535 ]' 00:17:52.535 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.535 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.535 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.792 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:52.792 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.792 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.792 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.792 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.050 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:17:53.050 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:17:53.982 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.983 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:53.983 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.983 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.983 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.983 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.983 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:53.983 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:54.245 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:54.245 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.245 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.245 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:54.245 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:54.245 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.245 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.245 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.245 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.245 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.245 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.245 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.245 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.811 00:17:54.811 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.811 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.811 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.069 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.069 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.069 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.069 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.069 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.069 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.069 { 00:17:55.069 "cntlid": 133, 00:17:55.069 "qid": 0, 00:17:55.069 "state": "enabled", 00:17:55.069 "thread": "nvmf_tgt_poll_group_000", 00:17:55.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:55.069 "listen_address": { 00:17:55.069 "trtype": "TCP", 00:17:55.069 "adrfam": "IPv4", 00:17:55.069 "traddr": "10.0.0.2", 00:17:55.069 "trsvcid": "4420" 00:17:55.069 }, 00:17:55.069 "peer_address": { 00:17:55.069 "trtype": "TCP", 00:17:55.069 "adrfam": "IPv4", 00:17:55.069 "traddr": "10.0.0.1", 00:17:55.069 "trsvcid": "39596" 00:17:55.069 }, 00:17:55.069 "auth": { 00:17:55.069 "state": "completed", 00:17:55.069 "digest": "sha512", 00:17:55.069 "dhgroup": "ffdhe6144" 00:17:55.069 } 00:17:55.069 } 00:17:55.069 ]' 00:17:55.069 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.069 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.069 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.069 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:55.069 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.069 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.069 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.069 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.332 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret 
DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:17:55.332 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:17:56.329 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.330 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:56.330 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.330 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.330 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.330 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.330 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:56.330 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:56.587 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:56.587 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.587 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.587 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:56.587 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:56.587 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.587 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:56.587 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.587 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.587 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.587 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:56.587 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:17:56.587 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.152 00:17:57.152 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.152 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.152 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.718 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.718 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.718 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.718 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.718 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.718 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.718 { 00:17:57.718 "cntlid": 135, 00:17:57.718 "qid": 0, 00:17:57.718 "state": "enabled", 00:17:57.718 "thread": "nvmf_tgt_poll_group_000", 00:17:57.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:57.718 "listen_address": { 00:17:57.718 "trtype": "TCP", 00:17:57.718 "adrfam": "IPv4", 00:17:57.718 "traddr": "10.0.0.2", 00:17:57.718 "trsvcid": "4420" 00:17:57.718 }, 00:17:57.718 "peer_address": { 00:17:57.718 "trtype": "TCP", 00:17:57.718 "adrfam": "IPv4", 00:17:57.718 "traddr": "10.0.0.1", 00:17:57.718 "trsvcid": "39620" 00:17:57.718 }, 00:17:57.718 "auth": { 00:17:57.718 "state": "completed", 00:17:57.718 "digest": "sha512", 00:17:57.718 "dhgroup": "ffdhe6144" 00:17:57.718 } 00:17:57.718 } 00:17:57.718 ]' 00:17:57.718 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.718 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.718 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.718 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:57.718 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.718 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.718 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.718 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.976 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:17:57.976 09:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:17:58.909 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.909 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:58.909 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.909 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.909 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.909 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.909 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.909 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:58.909 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:59.167 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:59.167 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.167 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.167 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:59.167 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:59.167 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.167 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.167 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.167 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.167 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.167 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.167 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.167 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.100 00:18:00.100 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.100 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.100 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.358 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.358 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.358 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.358 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.358 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.358 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.358 { 00:18:00.358 "cntlid": 137, 00:18:00.358 "qid": 0, 00:18:00.358 "state": "enabled", 00:18:00.358 "thread": "nvmf_tgt_poll_group_000", 00:18:00.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:00.358 "listen_address": { 00:18:00.358 "trtype": "TCP", 00:18:00.358 "adrfam": "IPv4", 00:18:00.358 "traddr": "10.0.0.2", 00:18:00.358 "trsvcid": "4420" 00:18:00.358 }, 00:18:00.358 "peer_address": { 00:18:00.358 "trtype": "TCP", 00:18:00.358 "adrfam": "IPv4", 00:18:00.358 "traddr": "10.0.0.1", 00:18:00.358 "trsvcid": "49574" 00:18:00.358 }, 00:18:00.358 "auth": { 00:18:00.358 "state": "completed", 00:18:00.358 "digest": "sha512", 00:18:00.358 "dhgroup": "ffdhe8192" 00:18:00.358 } 00:18:00.358 } 00:18:00.358 ]' 00:18:00.358 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.358 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.358 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.358 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:00.358 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.358 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.358 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.358 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.616 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:18:00.616 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:18:01.549 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.549 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:01.549 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.549 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.549 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.549 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.549 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:01.549 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:01.807 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:01.807 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.807 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.807 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:01.807 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:01.807 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.807 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.807 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.807 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.807 09:51:38 
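[annotation] Each cycle is also exercised through the kernel initiator: nvme-cli connects with the DHHC-1 secrets passed as --dhchap-secret / --dhchap-ctrl-secret, then disconnects and the host entry is removed on the target. A sketch under the same addresses and NQNs as the trace; the secret values here are placeholders, the real DHHC-1 blobs appear verbatim in the log:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a

    KEY='DHHC-1:00:...'        # host secret (placeholder)
    CTRL_KEY='DHHC-1:03:...'   # controller secret for bidirectional auth (placeholder)

    # Connect through the kernel NVMe/TCP initiator with DH-HMAC-CHAP, as at auth.sh@36.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CTRL_KEY"

    # Tear down and deregister the host, as at auth.sh@82 / auth.sh@83.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"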
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.807 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.807 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.807 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.739 00:18:02.739 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.739 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.739 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.997 { 00:18:02.997 "cntlid": 139, 00:18:02.997 "qid": 0, 00:18:02.997 "state": "enabled", 00:18:02.997 "thread": "nvmf_tgt_poll_group_000", 00:18:02.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:02.997 "listen_address": { 00:18:02.997 "trtype": "TCP", 00:18:02.997 "adrfam": "IPv4", 00:18:02.997 "traddr": "10.0.0.2", 00:18:02.997 "trsvcid": "4420" 00:18:02.997 }, 00:18:02.997 "peer_address": { 00:18:02.997 "trtype": "TCP", 00:18:02.997 "adrfam": "IPv4", 00:18:02.997 "traddr": "10.0.0.1", 00:18:02.997 "trsvcid": "49590" 00:18:02.997 }, 00:18:02.997 "auth": { 00:18:02.997 "state": "completed", 00:18:02.997 "digest": "sha512", 00:18:02.997 "dhgroup": "ffdhe8192" 00:18:02.997 } 00:18:02.997 } 00:18:02.997 ]' 00:18:02.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:02.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.997 09:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.997 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.563 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:18:03.563 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: --dhchap-ctrl-secret DHHC-1:02:N2I0OGMwNThhOTVhZDFmZDQxNDM0ZWU3YmU5NjBkYzhlNjNiNmE1Nzk1YzYwZDM3zfnzIg==: 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.496 09:51:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.496 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.428 00:18:05.428 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.428 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.428 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.685 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.685 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.685 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.685 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.685 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.685 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.685 { 00:18:05.685 "cntlid": 141, 00:18:05.685 "qid": 0, 00:18:05.685 "state": "enabled", 00:18:05.685 "thread": "nvmf_tgt_poll_group_000", 00:18:05.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:05.685 "listen_address": { 00:18:05.685 "trtype": "TCP", 00:18:05.685 "adrfam": "IPv4", 00:18:05.685 "traddr": "10.0.0.2", 00:18:05.685 "trsvcid": "4420" 00:18:05.685 }, 00:18:05.685 "peer_address": { 00:18:05.685 "trtype": "TCP", 00:18:05.685 "adrfam": "IPv4", 00:18:05.685 "traddr": "10.0.0.1", 00:18:05.685 "trsvcid": "49616" 00:18:05.685 }, 00:18:05.685 "auth": { 00:18:05.685 "state": "completed", 00:18:05.685 "digest": "sha512", 00:18:05.685 "dhgroup": "ffdhe8192" 00:18:05.685 } 00:18:05.685 } 00:18:05.685 ]' 00:18:05.685 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.685 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.685 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.942 09:51:42 
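[annotation] After each attach, the target is asked for the subsystem's qpairs and the negotiated auth parameters are checked with jq, which is where the JSON dumps in this trace come from. A short sketch of that verification, assuming as before that a bare rpc.py call reaches the target socket:

    # Dump the accepted qpair and check what DH-HMAC-CHAP negotiated (auth.sh@74..@77).
    QPAIRS=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    [[ $(jq -r '.[0].auth.digest'  <<< "$QPAIRS") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$QPAIRS") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$QPAIRS") == completed ]]

The expected digest/dhgroup values change per iteration (ffdhe6144 earlier, ffdhe8192 here); "completed" is the auth state the suite requires in every case.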
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:05.942 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.942 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.942 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.942 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.200 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:18:06.200 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:01:Y2EwNmFhYTRiZTM0NjVlZjk1OTYwOWE5NGViMDdiZTIZ3qj1: 00:18:07.131 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.131 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:07.131 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.131 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.131 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.131 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.131 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.131 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.388 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:07.388 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.388 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.388 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:07.388 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:07.388 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.388 09:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:07.388 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.388 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.388 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.388 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:07.388 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.388 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.321 00:18:08.321 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.321 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.321 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.321 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.321 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.321 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.321 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.321 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.321 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.321 { 00:18:08.321 "cntlid": 143, 00:18:08.321 "qid": 0, 00:18:08.321 "state": "enabled", 00:18:08.321 "thread": "nvmf_tgt_poll_group_000", 00:18:08.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:08.321 "listen_address": { 00:18:08.321 "trtype": "TCP", 00:18:08.321 "adrfam": "IPv4", 00:18:08.321 "traddr": "10.0.0.2", 00:18:08.321 "trsvcid": "4420" 00:18:08.321 }, 00:18:08.321 "peer_address": { 00:18:08.321 "trtype": "TCP", 00:18:08.321 "adrfam": "IPv4", 00:18:08.321 "traddr": "10.0.0.1", 00:18:08.321 "trsvcid": "49644" 00:18:08.321 }, 00:18:08.321 "auth": { 00:18:08.321 "state": "completed", 00:18:08.321 "digest": "sha512", 00:18:08.321 "dhgroup": "ffdhe8192" 00:18:08.321 } 00:18:08.321 } 00:18:08.321 ]' 00:18:08.321 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.579 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.579 
09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.579 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:08.579 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.579 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.579 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.579 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.837 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:18:08.837 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:18:09.770 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.770 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:09.770 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.770 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.770 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.770 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:09.770 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:09.770 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:09.770 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:09.770 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:09.770 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:10.028 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:10.028 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.028 09:51:46 
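[annotation] The set_options call at auth.sh@129/@130 above switches strategy: instead of pinning a single digest and DH group, the host now advertises every supported combination, so the remaining connections exercise negotiation between host and target. The reconfiguration is just:

    # Host side: offer all digests and DH groups and let the two sides negotiate.
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

The qpair dump that follows (cntlid 145, below) still reports sha512/ffdhe8192, so with everything on offer this run settles on those parameters.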
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.028 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:10.028 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:10.028 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.028 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.028 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.028 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.028 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.028 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.028 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.028 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.961 00:18:10.961 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.961 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.961 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.219 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.219 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.219 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.219 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.219 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.219 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.219 { 00:18:11.219 "cntlid": 145, 00:18:11.219 "qid": 0, 00:18:11.219 "state": "enabled", 00:18:11.219 "thread": "nvmf_tgt_poll_group_000", 00:18:11.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:11.219 "listen_address": { 00:18:11.219 "trtype": "TCP", 00:18:11.219 "adrfam": "IPv4", 00:18:11.219 "traddr": "10.0.0.2", 00:18:11.219 "trsvcid": "4420" 00:18:11.219 }, 00:18:11.219 "peer_address": { 00:18:11.219 
"trtype": "TCP", 00:18:11.219 "adrfam": "IPv4", 00:18:11.219 "traddr": "10.0.0.1", 00:18:11.219 "trsvcid": "38272" 00:18:11.219 }, 00:18:11.219 "auth": { 00:18:11.219 "state": "completed", 00:18:11.219 "digest": "sha512", 00:18:11.219 "dhgroup": "ffdhe8192" 00:18:11.219 } 00:18:11.219 } 00:18:11.219 ]' 00:18:11.219 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.219 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.219 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.219 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.219 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.219 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.219 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.219 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.476 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:18:11.476 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZGIxOThhNTM3YmMyOTM3MTRiMTkwM2ZiZTViMDY4NmRiZDM3NDg4MTkwYjczY2UwR2T/cg==: --dhchap-ctrl-secret DHHC-1:03:MWI2NWFjMjdkMTJhNDcwZjUwNzVjZjM0MzJjNTdlNzAzZGQwZWJlYjYyZmZmMDUwMjg1MjFkZjQ1MDVmNzFlZaT20rw=: 00:18:12.409 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.409 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:12.409 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.409 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.409 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.409 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:18:12.409 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.409 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.409 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.409 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:12.409 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:12.409 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:12.409 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:12.409 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.409 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:12.409 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.409 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:12.409 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:12.409 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:13.341 request: 00:18:13.341 { 00:18:13.341 "name": "nvme0", 00:18:13.341 "trtype": "tcp", 00:18:13.341 "traddr": "10.0.0.2", 00:18:13.341 "adrfam": "ipv4", 00:18:13.341 "trsvcid": "4420", 00:18:13.341 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:13.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:13.341 "prchk_reftag": false, 00:18:13.341 "prchk_guard": false, 00:18:13.341 "hdgst": false, 00:18:13.341 "ddgst": false, 00:18:13.341 "dhchap_key": "key2", 00:18:13.341 "allow_unrecognized_csi": false, 00:18:13.341 "method": "bdev_nvme_attach_controller", 00:18:13.341 "req_id": 1 00:18:13.341 } 00:18:13.341 Got JSON-RPC error response 00:18:13.341 response: 00:18:13.341 { 00:18:13.341 "code": -5, 00:18:13.341 "message": "Input/output error" 00:18:13.341 } 00:18:13.341 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:13.341 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:13.341 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:13.341 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:13.341 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:13.342 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.342 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.342 09:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.342 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.342 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.342 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.342 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.342 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:13.342 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:13.342 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:13.342 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:13.342 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.342 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:13.342 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.342 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:13.342 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:13.342 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:14.274 request: 00:18:14.274 { 00:18:14.274 "name": "nvme0", 00:18:14.274 "trtype": "tcp", 00:18:14.274 "traddr": "10.0.0.2", 00:18:14.274 "adrfam": "ipv4", 00:18:14.274 "trsvcid": "4420", 00:18:14.274 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:14.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:14.274 "prchk_reftag": false, 00:18:14.274 "prchk_guard": false, 00:18:14.274 "hdgst": false, 00:18:14.274 "ddgst": false, 00:18:14.274 "dhchap_key": "key1", 00:18:14.274 "dhchap_ctrlr_key": "ckey2", 00:18:14.274 "allow_unrecognized_csi": false, 00:18:14.274 "method": "bdev_nvme_attach_controller", 00:18:14.274 "req_id": 1 00:18:14.274 } 00:18:14.274 Got JSON-RPC error response 00:18:14.274 response: 00:18:14.274 { 00:18:14.274 "code": -5, 00:18:14.274 "message": "Input/output error" 00:18:14.274 } 00:18:14.274 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:14.274 09:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.274 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.274 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.274 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:14.274 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.275 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.275 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.275 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:18:14.275 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.275 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.275 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.275 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.275 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:14.275 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.275 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:14.275 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.275 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:14.275 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.275 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.275 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.275 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.902 request: 00:18:14.902 { 00:18:14.902 "name": "nvme0", 00:18:14.902 "trtype": "tcp", 00:18:14.902 "traddr": "10.0.0.2", 00:18:14.902 "adrfam": "ipv4", 00:18:14.902 "trsvcid": "4420", 00:18:14.902 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:14.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:14.902 "prchk_reftag": false, 00:18:14.902 "prchk_guard": false, 00:18:14.902 "hdgst": false, 00:18:14.902 "ddgst": false, 00:18:14.902 "dhchap_key": "key1", 00:18:14.902 "dhchap_ctrlr_key": "ckey1", 00:18:14.902 "allow_unrecognized_csi": false, 00:18:14.902 "method": "bdev_nvme_attach_controller", 00:18:14.902 "req_id": 1 00:18:14.902 } 00:18:14.902 Got JSON-RPC error response 00:18:14.902 response: 00:18:14.902 { 00:18:14.902 "code": -5, 00:18:14.902 "message": "Input/output error" 00:18:14.902 } 00:18:14.902 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:14.902 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.902 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.902 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.902 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:14.902 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.902 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.902 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.902 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3723634 00:18:14.902 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3723634 ']' 00:18:14.902 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3723634 00:18:14.902 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:14.902 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.902 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3723634 00:18:14.902 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.902 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.902 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3723634' 00:18:14.902 killing process with pid 3723634 00:18:14.902 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3723634 00:18:14.902 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3723634 00:18:15.159 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:15.159 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:15.159 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.159 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:15.159 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3746526 00:18:15.159 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:15.159 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3746526 00:18:15.159 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3746526 ']' 00:18:15.159 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.159 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.159 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.159 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.159 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.416 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.416 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:15.417 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:15.417 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:15.417 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.417 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.417 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:15.417 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3746526 00:18:15.417 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3746526 ']' 00:18:15.417 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.417 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.417 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
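At this point auth.sh has restarted the target as nvmf_tgt --wait-for-rpc -L nvmf_auth (nvmfpid=3746526) and waitforlisten blocks until the new process answers on /var/tmp/spdk.sock. The wait amounts to polling the RPC socket with a benign call; a minimal sketch of that pattern follows, reusing the rpc.py path from this run, with the retry budget and interval chosen arbitrarily rather than taken from autotest_common.sh:

# poll the freshly started nvmf_tgt until its JSON-RPC server responds
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
rpc_sock=/var/tmp/spdk.sock        # default socket the target listens on
for _ in $(seq 1 100); do          # retry budget: arbitrary assumption
    # rpc_get_methods is served even while --wait-for-rpc holds initialization
    if "$rpc_py" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
        break                      # target is up and serving RPCs
    fi
    sleep 0.5
done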
00:18:15.417 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.417 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.675 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.675 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:15.675 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:15.675 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.675 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.931 null0 00:18:15.931 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.931 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:15.931 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ob2 00:18:15.931 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.931 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.931 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.931 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.ZeY ]] 00:18:15.931 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZeY 00:18:15.931 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.931 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.931 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.931 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:15.931 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.RmI 00:18:15.931 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.931 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.931 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.BDt ]] 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BDt 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:15.932 09:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.kd8 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.7r0 ]] 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7r0 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.amK 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
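The connect_authenticate sha512 ffdhe8192 3 pass traced above reduces to three RPCs, all of which appear verbatim in this run: load the DHCHAP secret into the keyring, bind the host NQN to that key on the subsystem, and attach from the host application on /var/tmp/host.sock with the matching --dhchap-key. Collapsed into one place for readability (the shell variables are shorthand for paths already shown; whether the host-side app needs its own keyring_file_add_key calls depends on how auth.sh primed it earlier, so treat this as an illustration rather than the exact script):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

# 1) register key3 (generated earlier as /tmp/spdk.key-sha512.amK) in the keyring
"$rpc_py" keyring_file_add_key key3 /tmp/spdk.key-sha512.amK

# 2) target side: allow the host NQN on cnode0, bound to DHCHAP key3
"$rpc_py" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3

# 3) host side: attach a controller through the host app's socket using the same key
"$rpc_py" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key key3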
00:18:15.932 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.303 nvme0n1 00:18:17.303 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.303 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.303 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.562 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.562 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.562 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.562 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.562 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.562 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.562 { 00:18:17.562 "cntlid": 1, 00:18:17.562 "qid": 0, 00:18:17.562 "state": "enabled", 00:18:17.562 "thread": "nvmf_tgt_poll_group_000", 00:18:17.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:17.562 "listen_address": { 00:18:17.562 "trtype": "TCP", 00:18:17.562 "adrfam": "IPv4", 00:18:17.562 "traddr": "10.0.0.2", 00:18:17.562 "trsvcid": "4420" 00:18:17.562 }, 00:18:17.562 "peer_address": { 00:18:17.562 "trtype": "TCP", 00:18:17.562 "adrfam": "IPv4", 00:18:17.562 "traddr": "10.0.0.1", 00:18:17.562 "trsvcid": "38342" 00:18:17.562 }, 00:18:17.562 "auth": { 00:18:17.562 "state": "completed", 00:18:17.562 "digest": "sha512", 00:18:17.562 "dhgroup": "ffdhe8192" 00:18:17.562 } 00:18:17.562 } 00:18:17.562 ]' 00:18:17.562 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.562 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.562 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.562 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.562 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.820 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.820 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.820 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.078 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:18:18.078 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:18:19.011 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.011 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:19.011 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.011 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.011 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.011 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:19.011 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.011 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.011 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.011 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:19.011 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:19.268 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:19.268 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:19.268 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:19.268 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:19.268 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.268 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:19.268 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.268 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:19.268 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.269 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.525 request: 00:18:19.525 { 00:18:19.525 "name": "nvme0", 00:18:19.525 "trtype": "tcp", 00:18:19.525 "traddr": "10.0.0.2", 00:18:19.525 "adrfam": "ipv4", 00:18:19.525 "trsvcid": "4420", 00:18:19.525 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:19.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:19.525 "prchk_reftag": false, 00:18:19.525 "prchk_guard": false, 00:18:19.525 "hdgst": false, 00:18:19.525 "ddgst": false, 00:18:19.525 "dhchap_key": "key3", 00:18:19.525 "allow_unrecognized_csi": false, 00:18:19.525 "method": "bdev_nvme_attach_controller", 00:18:19.525 "req_id": 1 00:18:19.525 } 00:18:19.525 Got JSON-RPC error response 00:18:19.525 response: 00:18:19.525 { 00:18:19.525 "code": -5, 00:18:19.525 "message": "Input/output error" 00:18:19.525 } 00:18:19.525 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:19.525 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:19.525 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:19.525 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:19.525 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:19.525 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:19.525 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:19.525 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:19.783 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:19.783 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:19.783 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:19.783 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:19.783 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.783 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:19.783 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.783 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:19.783 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.783 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.041 request: 00:18:20.041 { 00:18:20.041 "name": "nvme0", 00:18:20.041 "trtype": "tcp", 00:18:20.041 "traddr": "10.0.0.2", 00:18:20.041 "adrfam": "ipv4", 00:18:20.041 "trsvcid": "4420", 00:18:20.041 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:20.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:20.042 "prchk_reftag": false, 00:18:20.042 "prchk_guard": false, 00:18:20.042 "hdgst": false, 00:18:20.042 "ddgst": false, 00:18:20.042 "dhchap_key": "key3", 00:18:20.042 "allow_unrecognized_csi": false, 00:18:20.042 "method": "bdev_nvme_attach_controller", 00:18:20.042 "req_id": 1 00:18:20.042 } 00:18:20.042 Got JSON-RPC error response 00:18:20.042 response: 00:18:20.042 { 00:18:20.042 "code": -5, 00:18:20.042 "message": "Input/output error" 00:18:20.042 } 00:18:20.042 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:20.042 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:20.042 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:20.042 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:20.042 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:20.042 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:20.042 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:20.042 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.042 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.042 09:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.300 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:20.300 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.300 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.300 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.300 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:20.300 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.300 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.300 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.300 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:20.300 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:20.300 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:20.300 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:20.300 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.300 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:20.300 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.300 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:20.301 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:20.301 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:20.866 request: 00:18:20.866 { 00:18:20.866 "name": "nvme0", 00:18:20.866 "trtype": "tcp", 00:18:20.866 "traddr": "10.0.0.2", 00:18:20.866 "adrfam": "ipv4", 00:18:20.866 "trsvcid": "4420", 00:18:20.866 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:20.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:20.866 "prchk_reftag": false, 00:18:20.866 "prchk_guard": false, 00:18:20.866 "hdgst": false, 00:18:20.866 "ddgst": false, 00:18:20.866 "dhchap_key": "key0", 00:18:20.866 "dhchap_ctrlr_key": "key1", 00:18:20.866 "allow_unrecognized_csi": false, 00:18:20.866 "method": "bdev_nvme_attach_controller", 00:18:20.866 "req_id": 1 00:18:20.866 } 00:18:20.866 Got JSON-RPC error response 00:18:20.866 response: 00:18:20.866 { 00:18:20.866 "code": -5, 00:18:20.866 "message": "Input/output error" 00:18:20.866 } 00:18:20.866 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:20.866 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:20.866 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:20.866 09:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:20.866 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:20.866 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:20.866 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:21.432 nvme0n1 00:18:21.432 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:21.432 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.432 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:21.690 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.690 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.690 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.948 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:18:21.948 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.948 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.948 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.948 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:21.948 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:21.948 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:23.319 nvme0n1 00:18:23.319 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:23.319 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:23.319 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.577 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.577 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:23.577 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.577 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.577 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.577 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:23.577 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:23.577 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.835 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.835 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:18:23.835 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: --dhchap-ctrl-secret DHHC-1:03:YmJjOGMwM2U0MWE3ZmI2M2E4ZDY0Yzk0OGMyNWEwY2NiMjFhNzBiMWIzYjY1ZGJhYzNiZmM4MTYzMGRjZDMwOXK7mn0=: 00:18:24.766 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:24.766 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:24.766 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:24.766 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:24.766 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:24.766 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:24.766 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:24.766 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.766 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.023 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:25.023 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:25.023 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:25.023 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:25.023 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.023 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:25.023 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.023 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:25.023 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:25.023 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:25.955 request: 00:18:25.956 { 00:18:25.956 "name": "nvme0", 00:18:25.956 "trtype": "tcp", 00:18:25.956 "traddr": "10.0.0.2", 00:18:25.956 "adrfam": "ipv4", 00:18:25.956 "trsvcid": "4420", 00:18:25.956 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:25.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:25.956 "prchk_reftag": false, 00:18:25.956 "prchk_guard": false, 00:18:25.956 "hdgst": false, 00:18:25.956 "ddgst": false, 00:18:25.956 "dhchap_key": "key1", 00:18:25.956 "allow_unrecognized_csi": false, 00:18:25.956 "method": "bdev_nvme_attach_controller", 00:18:25.956 "req_id": 1 00:18:25.956 } 00:18:25.956 Got JSON-RPC error response 00:18:25.956 response: 00:18:25.956 { 00:18:25.956 "code": -5, 00:18:25.956 "message": "Input/output error" 00:18:25.956 } 00:18:25.956 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:25.956 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.956 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.956 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.956 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:25.956 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:25.956 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:27.382 nvme0n1 00:18:27.382 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:27.382 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:27.382 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.382 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.382 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.382 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.640 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:27.640 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.640 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.640 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.640 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:27.640 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:27.641 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:27.898 nvme0n1 00:18:27.898 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:27.898 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.898 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:28.155 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.155 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.155 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.720 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:28.720 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.720 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.720 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.720 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: '' 2s 00:18:28.720 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:28.720 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:28.720 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: 00:18:28.720 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:28.720 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:28.720 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:28.720 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: ]] 00:18:28.720 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NjAwMzA0MTk0NjY5ZmY1ZmJlYzllNmIzNWNjZWFlM2NWsoqS: 00:18:28.720 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:28.720 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:28.720 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:30.616 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:30.616 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:30.616 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:30.616 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:30.616 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:30.616 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:30.617 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:30.617 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:30.617 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.617 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.617 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.617 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: 2s 00:18:30.617 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:30.617 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:30.617 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:30.617 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: 00:18:30.617 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:30.617 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:30.617 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:30.617 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: ]] 00:18:30.617 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZjYwOTY0YzFhNGJiNTM0NWIyMjcxNmQ4OWUzMDYwZmZlNGZhZjUzNzY3ODU0YWUyPsQ4YQ==: 00:18:30.617 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:30.617 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:32.525 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:32.525 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:32.525 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:32.525 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:32.525 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:32.525 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:32.525 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:32.525 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.782 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:32.782 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.782 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.782 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.782 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:32.782 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
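waitforblk is what gates each 2-second re-auth window above: it simply polls lsblk until the namespace shows up as a block device. A condensed version of the loop seen in the trace (the retry budget is an assumption, not shown here):

  waitforblk() {
      local name=$1 i=0
      while ! lsblk -l -o NAME | grep -q -w "$name"; do
          ((++i > 15)) && return 1   # retry limit assumed for the sketch
          sleep 1
      done
  }
  waitforblk nvme0n1
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # tear the kernel session down once verified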
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:32.782 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:34.154 nvme0n1 00:18:34.154 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:34.154 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.154 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.154 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.154 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:34.154 09:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:34.720 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:34.720 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:34.720 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.979 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.979 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:34.979 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.979 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.979 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.979 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:34.979 09:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:35.544 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:35.544 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:35.544 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.544 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.544 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:35.544 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.544 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.544 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.544 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:35.544 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:35.544 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:35.544 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:35.544 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.544 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:35.544 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.544 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:35.544 09:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:36.476 request: 00:18:36.476 { 00:18:36.476 "name": "nvme0", 00:18:36.476 "dhchap_key": "key1", 00:18:36.476 "dhchap_ctrlr_key": "key3", 00:18:36.476 "method": "bdev_nvme_set_keys", 00:18:36.476 "req_id": 1 00:18:36.476 } 00:18:36.476 Got JSON-RPC error response 00:18:36.476 response: 00:18:36.476 { 00:18:36.476 "code": -13, 00:18:36.476 "message": "Permission denied" 00:18:36.476 } 00:18:36.476 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:36.476 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:36.476 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:36.476 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:36.476 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:36.476 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:36.476 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.733 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
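The NOT wrapper asserts that re-keying the host bdev with a pair the target no longer accepts is rejected: the RPC must come back with JSON-RPC error -13 (Permission denied) and a non-zero exit status. A stripped-down version of the same negative check, using the socket and arguments from the trace:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  if $rpc_py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key key3; then
      echo "ERROR: stale key pair was accepted" && exit 1
  fi
  # rpc.py exits non-zero and echoes the code -13 / "Permission denied" response seen above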
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:36.733 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:37.663 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:37.663 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:37.663 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.920 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:37.920 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:37.920 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.920 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.920 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.920 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:37.920 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:37.920 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:39.290 nvme0n1 00:18:39.290 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:39.290 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.290 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.290 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.290 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:39.290 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:39.290 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:39.290 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
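Because the bdev was attached with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1, the host gives up on a controller it can no longer authenticate and removes it on its own; the test only needs to poll bdev_nvme_get_controllers until the list is empty before setting the next key pair and reconnecting. The wait amounts to:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  while (( $($rpc_py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length) != 0 )); do
      sleep 1   # the 1s ctrlr-loss-timeout drops the bdev once authentication can no longer succeed
  done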
00:18:39.290 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.290 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:39.290 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.290 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:39.290 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:40.221 request: 00:18:40.221 { 00:18:40.221 "name": "nvme0", 00:18:40.221 "dhchap_key": "key2", 00:18:40.221 "dhchap_ctrlr_key": "key0", 00:18:40.222 "method": "bdev_nvme_set_keys", 00:18:40.222 "req_id": 1 00:18:40.222 } 00:18:40.222 Got JSON-RPC error response 00:18:40.222 response: 00:18:40.222 { 00:18:40.222 "code": -13, 00:18:40.222 "message": "Permission denied" 00:18:40.222 } 00:18:40.222 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:40.222 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:40.222 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:40.222 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:40.222 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:40.222 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:40.222 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.479 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:40.479 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:41.412 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:41.412 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:41.412 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.670 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:41.670 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:41.670 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:41.670 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3723655 00:18:41.670 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3723655 ']' 00:18:41.670 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3723655 00:18:41.670 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:41.670 
09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.670 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3723655 00:18:41.927 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:41.927 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:41.927 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3723655' 00:18:41.928 killing process with pid 3723655 00:18:41.928 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3723655 00:18:41.928 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3723655 00:18:42.185 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:42.185 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:42.185 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:42.185 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:42.185 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:42.185 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:42.185 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:42.185 rmmod nvme_tcp 00:18:42.185 rmmod nvme_fabrics 00:18:42.185 rmmod nvme_keyring 00:18:42.185 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:42.185 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:42.185 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:42.185 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3746526 ']' 00:18:42.185 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3746526 00:18:42.186 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3746526 ']' 00:18:42.186 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3746526 00:18:42.186 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:42.186 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.186 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3746526 00:18:42.444 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:42.444 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:42.444 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3746526' 00:18:42.444 killing process with pid 3746526 00:18:42.444 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3746526 00:18:42.444 09:52:19 
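killprocess, used twice above for the host daemon and then the nvmf target, follows one pattern: check the pid is still alive, refuse to signal a bare sudo wrapper, then kill and wait so the exit is captured in the log. Roughly (error handling trimmed):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 0                        # already gone
      if [[ $(uname) == Linux ]]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")
          [[ $name == sudo ]] && return 1               # never kill a bare sudo process
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }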
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3746526 00:18:42.444 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:42.444 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:42.444 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:42.444 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:42.444 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:42.444 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:42.444 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:42.444 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:42.444 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:42.444 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.444 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.444 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.979 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:44.979 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ob2 /tmp/spdk.key-sha256.RmI /tmp/spdk.key-sha384.kd8 /tmp/spdk.key-sha512.amK /tmp/spdk.key-sha512.ZeY /tmp/spdk.key-sha384.BDt /tmp/spdk.key-sha256.7r0 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:44.979 00:18:44.979 real 3m31.671s 00:18:44.979 user 8m17.083s 00:18:44.980 sys 0m27.883s 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.980 ************************************ 00:18:44.980 END TEST nvmf_auth_target 00:18:44.980 ************************************ 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:44.980 ************************************ 00:18:44.980 START TEST nvmf_bdevio_no_huge 00:18:44.980 ************************************ 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:44.980 * Looking for test storage... 
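Teardown is symmetric with setup: unload the kernel NVMe/TCP modules, strip only the SPDK_NVMF-tagged iptables rules (the comment added when the rules were inserted makes them grep-able), remove the target namespace, flush the initiator address and delete the generated DHHC key files. In outline:

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules tagged at setup
  ip netns delete cvl_0_0_ns_spdk                        # what _remove_spdk_ns boils down to (assumed)
  ip -4 addr flush cvl_0_1
  rm -f /tmp/spdk.key-null.ob2 /tmp/spdk.key-sha256.RmI /tmp/spdk.key-sha384.kd8 \
        /tmp/spdk.key-sha512.amK /tmp/spdk.key-sha512.ZeY /tmp/spdk.key-sha384.BDt /tmp/spdk.key-sha256.7r0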
00:18:44.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:44.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.980 --rc genhtml_branch_coverage=1 00:18:44.980 --rc genhtml_function_coverage=1 00:18:44.980 --rc genhtml_legend=1 00:18:44.980 --rc geninfo_all_blocks=1 00:18:44.980 --rc geninfo_unexecuted_blocks=1 00:18:44.980 00:18:44.980 ' 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:44.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.980 --rc genhtml_branch_coverage=1 00:18:44.980 --rc genhtml_function_coverage=1 00:18:44.980 --rc genhtml_legend=1 00:18:44.980 --rc geninfo_all_blocks=1 00:18:44.980 --rc geninfo_unexecuted_blocks=1 00:18:44.980 00:18:44.980 ' 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:44.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.980 --rc genhtml_branch_coverage=1 00:18:44.980 --rc genhtml_function_coverage=1 00:18:44.980 --rc genhtml_legend=1 00:18:44.980 --rc geninfo_all_blocks=1 00:18:44.980 --rc geninfo_unexecuted_blocks=1 00:18:44.980 00:18:44.980 ' 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:44.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.980 --rc genhtml_branch_coverage=1 00:18:44.980 --rc genhtml_function_coverage=1 00:18:44.980 --rc genhtml_legend=1 00:18:44.980 --rc geninfo_all_blocks=1 00:18:44.980 --rc geninfo_unexecuted_blocks=1 00:18:44.980 00:18:44.980 ' 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
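The lcov probe above relies on cmp_versions from scripts/common.sh: split both version strings on '.', '-' and ':', then compare component by component, treating a missing component as zero. Only the '<' case exercised by "lt 1.15 2" is sketched here; the real helper also sanitizes each component through decimal() and supports other operators.

  cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
      local -a ver1 ver2
      local v n
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      n=${#ver1[@]}
      (( ${#ver2[@]} > n )) && n=${#ver2[@]}
      for ((v = 0; v < n; v++)); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller at this component
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal is not "<"
  }
  cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2.x"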
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.980 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:44.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:44.981 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:46.883 
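The "[: : integer expression expected" complaint above comes from testing an unset flag with -eq: the variable expands to an empty string, which test cannot parse as a number, so the check fails noisily but harmlessly. The usual guard is to give the expansion a numeric default before comparing; an illustrative pattern only, with a placeholder flag name rather than the one common.sh actually tests:

  if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then   # hypothetical flag, default to 0 when unset
      echo "optional feature enabled"
  fi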
09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:46.883 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:46.883 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:46.883 Found net devices under 0000:09:00.0: cvl_0_0 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
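Device discovery here is pure sysfs walking: the two E810 ports matched by PCI ID are mapped to their netdev names by globbing /sys/bus/pci/devices/<bdf>/net/ and keeping interfaces that are up. Stripped down (population of the PCI list happens earlier in common.sh; the operstate read is an assumption, the trace only shows the resulting "[[ up == up ]]" check):

  for pci in "${pci_devs[@]}"; do                        # e.g. 0000:09:00.0 and 0000:09:00.1
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev bound to the port
      for net_dev in "${pci_net_devs[@]}"; do
          [[ $(cat "$net_dev/operstate") == up ]] || continue   # assumed source of the "up" check
          net_devs+=("${net_dev##*/}")                   # cvl_0_0 / cvl_0_1
      done
  done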
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:46.883 Found net devices under 0000:09:00.1: cvl_0_1 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:46.883 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:46.884 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:46.884 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:46.884 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:46.884 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:46.884 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:46.884 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.884 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:46.884 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:46.884 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:46.884 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:46.884 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
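nvmf_tcp_init splits the two ports across a network namespace so target and initiator traffic really crosses the wire: cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2 while cvl_0_1 stays in the root namespace as 10.0.0.1. The commands from the trace, in order:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up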
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:47.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:18:47.141 00:18:47.141 --- 10.0.0.2 ping statistics --- 00:18:47.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.141 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:47.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:47.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:18:47.141 00:18:47.141 --- 10.0.0.1 ping statistics --- 00:18:47.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.141 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3751776 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3751776 00:18:47.141 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3751776 ']' 00:18:47.142 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.142 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
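With the firewall opened for port 4420 and connectivity verified in both directions, the target is started inside the namespace; --no-huge -s 1024 keeps it on regular pages with a 1 GiB memory cap, which is the point of this test. The essential launch sequence looks like the sketch below (waitforlisten's polling of the RPC socket is not shown in the trace and is assumed):

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator
  modprobe nvme-tcp

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
  nvmfpid=$!
  # wait until /var/tmp/spdk.sock answers before issuing RPCs (assumed behaviour of waitforlisten)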
-- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.142 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.142 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.142 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:47.142 [2024-11-20 09:52:23.943568] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:18:47.142 [2024-11-20 09:52:23.943675] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:47.142 [2024-11-20 09:52:24.023577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:47.399 [2024-11-20 09:52:24.083843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.399 [2024-11-20 09:52:24.083907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.399 [2024-11-20 09:52:24.083936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.399 [2024-11-20 09:52:24.083948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.399 [2024-11-20 09:52:24.083957] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:47.399 [2024-11-20 09:52:24.085026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:47.399 [2024-11-20 09:52:24.085101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:47.399 [2024-11-20 09:52:24.085053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:47.399 [2024-11-20 09:52:24.085103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:47.399 [2024-11-20 09:52:24.251174] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:47.399 Malloc0 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:47.399 [2024-11-20 09:52:24.291521] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:47.399 { 00:18:47.399 "params": { 00:18:47.399 "name": "Nvme$subsystem", 00:18:47.399 "trtype": "$TEST_TRANSPORT", 00:18:47.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:47.399 "adrfam": "ipv4", 00:18:47.399 "trsvcid": "$NVMF_PORT", 00:18:47.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:47.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:47.399 "hdgst": ${hdgst:-false}, 00:18:47.399 "ddgst": ${ddgst:-false} 00:18:47.399 }, 00:18:47.399 "method": "bdev_nvme_attach_controller" 00:18:47.399 } 00:18:47.399 EOF 00:18:47.399 )") 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:47.399 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:47.399 "params": { 00:18:47.399 "name": "Nvme1", 00:18:47.399 "trtype": "tcp", 00:18:47.399 "traddr": "10.0.0.2", 00:18:47.399 "adrfam": "ipv4", 00:18:47.399 "trsvcid": "4420", 00:18:47.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.400 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:47.400 "hdgst": false, 00:18:47.400 "ddgst": false 00:18:47.400 }, 00:18:47.400 "method": "bdev_nvme_attach_controller" 00:18:47.400 }' 00:18:47.657 [2024-11-20 09:52:24.341814] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
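The JSON printed above is only the inner bdev entry; bdevio consumes it through --json /dev/fd/62. As a rough illustration, a standalone config file for the same run could look like the sketch below. The attach parameters are copied verbatim from the trace; the outer "subsystems"/"config" wrapper and the temporary file name are assumptions based on SPDK's usual JSON config layout.

# Write a config that attaches the remote namespace as bdev Nvme1n1, then run
# bdevio against it exactly as the trace does (minus the fd redirection).
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024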
00:18:47.657 [2024-11-20 09:52:24.341891] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3751808 ] 00:18:47.657 [2024-11-20 09:52:24.415978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:47.657 [2024-11-20 09:52:24.482089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.657 [2024-11-20 09:52:24.482142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.657 [2024-11-20 09:52:24.482145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.914 I/O targets: 00:18:47.914 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:47.914 00:18:47.914 00:18:47.914 CUnit - A unit testing framework for C - Version 2.1-3 00:18:47.914 http://cunit.sourceforge.net/ 00:18:47.914 00:18:47.914 00:18:47.914 Suite: bdevio tests on: Nvme1n1 00:18:47.914 Test: blockdev write read block ...passed 00:18:47.914 Test: blockdev write zeroes read block ...passed 00:18:47.914 Test: blockdev write zeroes read no split ...passed 00:18:47.914 Test: blockdev write zeroes read split ...passed 00:18:48.172 Test: blockdev write zeroes read split partial ...passed 00:18:48.172 Test: blockdev reset ...[2024-11-20 09:52:24.835723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:48.172 [2024-11-20 09:52:24.835832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e26e0 (9): Bad file descriptor 00:18:48.172 [2024-11-20 09:52:24.893868] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:18:48.172 passed 00:18:48.172 Test: blockdev write read 8 blocks ...passed 00:18:48.172 Test: blockdev write read size > 128k ...passed 00:18:48.172 Test: blockdev write read invalid size ...passed 00:18:48.172 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:48.172 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:48.172 Test: blockdev write read max offset ...passed 00:18:48.172 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:48.172 Test: blockdev writev readv 8 blocks ...passed 00:18:48.172 Test: blockdev writev readv 30 x 1block ...passed 00:18:48.431 Test: blockdev writev readv block ...passed 00:18:48.431 Test: blockdev writev readv size > 128k ...passed 00:18:48.431 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:48.431 Test: blockdev comparev and writev ...[2024-11-20 09:52:25.105413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:48.431 [2024-11-20 09:52:25.105453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.431 [2024-11-20 09:52:25.105477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:48.431 [2024-11-20 09:52:25.105495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.431 [2024-11-20 09:52:25.105802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:48.431 [2024-11-20 09:52:25.105826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:48.431 [2024-11-20 09:52:25.105849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:48.431 [2024-11-20 09:52:25.105865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:48.431 [2024-11-20 09:52:25.106171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:48.431 [2024-11-20 09:52:25.106195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:48.431 [2024-11-20 09:52:25.106216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:48.431 [2024-11-20 09:52:25.106232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:48.431 [2024-11-20 09:52:25.106554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:48.431 [2024-11-20 09:52:25.106578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:48.431 [2024-11-20 09:52:25.106599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:48.431 [2024-11-20 09:52:25.106615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:48.431 passed 00:18:48.431 Test: blockdev nvme passthru rw ...passed 00:18:48.431 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:52:25.188526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:48.431 [2024-11-20 09:52:25.188559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:48.431 [2024-11-20 09:52:25.188701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:48.431 [2024-11-20 09:52:25.188724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:48.431 [2024-11-20 09:52:25.188856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:48.431 [2024-11-20 09:52:25.188879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:48.431 [2024-11-20 09:52:25.189012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:48.431 [2024-11-20 09:52:25.189035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:48.431 passed 00:18:48.431 Test: blockdev nvme admin passthru ...passed 00:18:48.431 Test: blockdev copy ...passed 00:18:48.431 00:18:48.431 Run Summary: Type Total Ran Passed Failed Inactive 00:18:48.431 suites 1 1 n/a 0 0 00:18:48.431 tests 23 23 23 0 0 00:18:48.431 asserts 152 152 152 0 n/a 00:18:48.431 00:18:48.431 Elapsed time = 1.071 seconds 00:18:48.690 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:48.690 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.690 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.690 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.690 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:48.690 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:48.690 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:48.690 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:48.690 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:48.690 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:48.690 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:48.690 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:48.690 rmmod nvme_tcp 00:18:48.948 rmmod nvme_fabrics 00:18:48.948 rmmod nvme_keyring 00:18:48.948 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:48.948 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:48.948 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:48.948 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3751776 ']' 00:18:48.948 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3751776 00:18:48.948 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3751776 ']' 00:18:48.948 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3751776 00:18:48.948 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:48.948 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.948 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3751776 00:18:48.948 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:48.948 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:48.948 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3751776' 00:18:48.948 killing process with pid 3751776 00:18:48.948 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3751776 00:18:48.948 09:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3751776 00:18:49.207 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:49.207 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:49.207 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:49.207 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:49.207 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:49.207 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:49.207 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:49.207 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:49.207 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:49.207 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.207 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.207 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:51.742 00:18:51.742 real 0m6.681s 00:18:51.742 user 0m10.620s 00:18:51.742 sys 0m2.632s 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:51.742 ************************************ 00:18:51.742 END TEST nvmf_bdevio_no_huge 00:18:51.742 ************************************ 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:51.742 ************************************ 00:18:51.742 START TEST nvmf_tls 00:18:51.742 ************************************ 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:51.742 * Looking for test storage... 00:18:51.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:51.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.742 --rc genhtml_branch_coverage=1 00:18:51.742 --rc genhtml_function_coverage=1 00:18:51.742 --rc genhtml_legend=1 00:18:51.742 --rc geninfo_all_blocks=1 00:18:51.742 --rc geninfo_unexecuted_blocks=1 00:18:51.742 00:18:51.742 ' 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:51.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.742 --rc genhtml_branch_coverage=1 00:18:51.742 --rc genhtml_function_coverage=1 00:18:51.742 --rc genhtml_legend=1 00:18:51.742 --rc geninfo_all_blocks=1 00:18:51.742 --rc geninfo_unexecuted_blocks=1 00:18:51.742 00:18:51.742 ' 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:51.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.742 --rc genhtml_branch_coverage=1 00:18:51.742 --rc genhtml_function_coverage=1 00:18:51.742 --rc genhtml_legend=1 00:18:51.742 --rc geninfo_all_blocks=1 00:18:51.742 --rc geninfo_unexecuted_blocks=1 00:18:51.742 00:18:51.742 ' 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:51.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.742 --rc genhtml_branch_coverage=1 00:18:51.742 --rc genhtml_function_coverage=1 00:18:51.742 --rc genhtml_legend=1 00:18:51.742 --rc geninfo_all_blocks=1 00:18:51.742 --rc geninfo_unexecuted_blocks=1 00:18:51.742 00:18:51.742 ' 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
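The cmp_versions/lt dance traced above is a field-wise numeric comparison used here to gate the lcov coverage options. A stripped-down sketch of the same idea (function and variable names are illustrative, not the ones in scripts/common.sh):

# Split both versions on '.', '-' or ':' and compare numerically field by
# field, treating missing fields as 0; succeeds when $1 sorts before $2.
version_lt() {
    local -a a b
    local i n
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
}

version_lt 1.15 2 && echo "1.15 sorts before 2"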
00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.742 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:51.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:51.743 09:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.646 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:53.646 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
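What the pci_devs/pci_net_devs machinery above boils down to: pick PCI functions whose vendor:device ID is on the supported list (Intel E810 ports are 8086:159b and 8086:1592, per the e810 array) and read their kernel net device names out of sysfs. The lspci-based sketch below is an approximation; the harness builds its own pci_bus_cache rather than calling lspci.

# Collect net devices that sit on supported NVMe-oF test NICs.
net_devs=()
while read -r pci _ id _; do
    case "$id" in
        8086:159b | 8086:1592)                      # Intel E810
            for path in "/sys/bus/pci/devices/$pci/net/"*; do
                [[ -e $path ]] && net_devs+=("${path##*/}")
            done
            ;;
    esac
done < <(lspci -Dn)

(( ${#net_devs[@]} )) && printf 'Found net device: %s\n' "${net_devs[@]}"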
00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:53.647 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:53.647 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:53.647 Found net devices under 0000:09:00.0: cvl_0_0 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:53.647 Found net devices under 0000:09:00.1: cvl_0_1 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:53.647 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:53.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:53.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:18:53.906 00:18:53.906 --- 10.0.0.2 ping statistics --- 00:18:53.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.906 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:53.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:53.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:18:53.906 00:18:53.906 --- 10.0.0.1 ping statistics --- 00:18:53.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.906 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3754008 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3754008 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3754008 ']' 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.906 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.906 [2024-11-20 09:52:30.691586] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
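Condensed, the namespace plumbing traced above amounts to the following: the first E810 port (cvl_0_0) is moved into a dedicated network namespace and plays the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, NVMe/TCP traffic is allowed in on port 4420, and reachability is checked in both directions. Device names and addresses are the ones in the log; this is a sketch, not the harness' nvmf_tcp_init itself.

TGT_NS=cvl_0_0_ns_spdk

ip netns add "$TGT_NS"
ip link set cvl_0_0 netns "$TGT_NS"

ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$TGT_NS" ip link set cvl_0_0 up
ip netns exec "$TGT_NS" ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface and sanity-check the path.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$TGT_NS" ping -c 1 10.0.0.1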
00:18:53.906 [2024-11-20 09:52:30.691683] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.906 [2024-11-20 09:52:30.766758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.164 [2024-11-20 09:52:30.824226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.164 [2024-11-20 09:52:30.824298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.164 [2024-11-20 09:52:30.824320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.164 [2024-11-20 09:52:30.824331] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.164 [2024-11-20 09:52:30.824341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:54.164 [2024-11-20 09:52:30.824993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.164 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.164 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:54.164 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:54.164 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:54.164 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.164 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.164 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:54.164 09:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:54.422 true 00:18:54.422 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:54.422 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:54.679 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:54.679 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:54.680 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:54.938 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:54.938 09:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:55.196 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:55.196 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:55.196 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:55.453 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:55.453 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:56.019 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:56.019 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:56.019 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:56.019 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:56.019 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:56.019 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:56.019 09:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:56.277 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:56.277 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:56.841 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:56.841 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:56.841 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:56.841 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:56.842 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:57.099 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:57.099 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:57.099 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:57.099 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:57.100 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:57.100 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:57.100 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:57.100 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:57.100 09:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:57.357 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:57.357 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:57.357 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:57.357 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:18:57.357 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:57.357 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:57.358 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:57.358 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:57.358 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:57.358 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:57.358 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.odUnBvpkrd 00:18:57.358 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:57.358 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.REbmCQLKZt 00:18:57.358 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:57.358 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:57.358 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.odUnBvpkrd 00:18:57.358 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.REbmCQLKZt 00:18:57.358 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:57.615 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:58.182 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.odUnBvpkrd 00:18:58.182 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.odUnBvpkrd 00:18:58.182 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:58.440 [2024-11-20 09:52:35.147086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.440 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:58.697 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:58.955 [2024-11-20 09:52:35.680633] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:58.955 [2024-11-20 09:52:35.680847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.955 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:59.213 malloc0 00:18:59.213 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:59.473 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.odUnBvpkrd 00:18:59.765 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:00.048 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.odUnBvpkrd 00:19:10.015 Initializing NVMe Controllers 00:19:10.015 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:10.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:10.015 Initialization complete. Launching workers. 00:19:10.015 ======================================================== 00:19:10.015 Latency(us) 00:19:10.015 Device Information : IOPS MiB/s Average min max 00:19:10.015 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8673.08 33.88 7381.23 1146.89 8890.08 00:19:10.015 ======================================================== 00:19:10.015 Total : 8673.08 33.88 7381.23 1146.89 8890.08 00:19:10.015 00:19:10.015 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.odUnBvpkrd 00:19:10.015 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:10.015 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:10.015 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:10.015 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.odUnBvpkrd 00:19:10.015 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:10.015 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3755916 00:19:10.015 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:10.015 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:10.015 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3755916 /var/tmp/bdevperf.sock 00:19:10.015 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3755916 ']' 00:19:10.015 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.015 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.015 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
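Pulling the TLS pieces above together, a hedged sketch of the two sides: the retained PSK (the NVMeTLSkey-1:01:...: string appears to be the NVMe/TCP PSK interchange format, base64 of the configured secret plus a CRC32) lives in a 0600 file, is registered with the file keyring, and gates the host on the subsystem; bdevperf then attaches with the same named key. Commands, flags, key material, and NQNs are copied from the trace; the assumption is only that rpc.py is on PATH and talks to the default /var/tmp/spdk.sock unless -s says otherwise.

# Target side (nvmf_tgt started with --wait-for-rpc, ssl socket impl forced to TLS 1.3).
PSK_FILE=$(mktemp)
echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$PSK_FILE"
chmod 0600 "$PSK_FILE"

rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 "$PSK_FILE"
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# Initiator side (bdevperf running with -z -r /var/tmp/bdevperf.sock).
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$PSK_FILE"
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

The -k flag on the listener and the --psk argument on the attach are what produce the "TLS support is considered experimental" notices visible in the trace.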
00:19:10.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:10.015 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.015 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.274 [2024-11-20 09:52:46.935145] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:19:10.274 [2024-11-20 09:52:46.935243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3755916 ] 00:19:10.274 [2024-11-20 09:52:47.002007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.274 [2024-11-20 09:52:47.060656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.274 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.274 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:10.274 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.odUnBvpkrd 00:19:10.842 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:10.842 [2024-11-20 09:52:47.752389] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:11.100 TLSTESTn1 00:19:11.100 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:11.100 Running I/O for 10 seconds... 
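Condensed, the RPC sequence traced above (setup_nvmf_tgt in target/tls.sh plus the attach done by run_bdevperf) amounts to the sketch below. Every command is taken verbatim from the trace; the RPC, KEY and BPERF_RPC shell variables are shorthand introduced here for readability, and the key path is the one mktemp produced in this run.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
KEY=/tmp/tmp.odUnBvpkrd                  # PSK in interchange format, chmod 0600 above

# Target side: TLS 1.3 on the ssl sock impl, TCP transport, subsystem, TLS listener,
# malloc namespace, keyring entry, and the allowed host bound to that key.
$RPC sock_impl_set_options -i ssl --tls-version 13
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 "$KEY"
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# Initiator side (bdevperf): register the same key on the bdevperf RPC socket,
# then reference it by name when attaching the controller over TLS.
BPERF_RPC="$RPC -s /var/tmp/bdevperf.sock"
$BPERF_RPC keyring_file_add_key key0 "$KEY"
$BPERF_RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

The spdk_nvme_perf run above exercises the same path from a standalone initiator, passing the key file directly via --psk-path instead of a keyring name.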
00:19:13.404 3312.00 IOPS, 12.94 MiB/s [2024-11-20T08:52:51.252Z] 3354.50 IOPS, 13.10 MiB/s [2024-11-20T08:52:52.186Z] 3435.67 IOPS, 13.42 MiB/s [2024-11-20T08:52:53.120Z] 3470.00 IOPS, 13.55 MiB/s [2024-11-20T08:52:54.053Z] 3497.00 IOPS, 13.66 MiB/s [2024-11-20T08:52:54.986Z] 3515.17 IOPS, 13.73 MiB/s [2024-11-20T08:52:56.357Z] 3510.00 IOPS, 13.71 MiB/s [2024-11-20T08:52:57.289Z] 3524.12 IOPS, 13.77 MiB/s [2024-11-20T08:52:58.222Z] 3524.00 IOPS, 13.77 MiB/s [2024-11-20T08:52:58.222Z] 3536.60 IOPS, 13.81 MiB/s 00:19:21.308 Latency(us) 00:19:21.308 [2024-11-20T08:52:58.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.308 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:21.308 Verification LBA range: start 0x0 length 0x2000 00:19:21.308 TLSTESTn1 : 10.02 3541.80 13.84 0.00 0.00 36076.28 8349.77 46603.38 00:19:21.308 [2024-11-20T08:52:58.222Z] =================================================================================================================== 00:19:21.308 [2024-11-20T08:52:58.222Z] Total : 3541.80 13.84 0.00 0.00 36076.28 8349.77 46603.38 00:19:21.308 { 00:19:21.308 "results": [ 00:19:21.308 { 00:19:21.308 "job": "TLSTESTn1", 00:19:21.308 "core_mask": "0x4", 00:19:21.308 "workload": "verify", 00:19:21.308 "status": "finished", 00:19:21.308 "verify_range": { 00:19:21.308 "start": 0, 00:19:21.308 "length": 8192 00:19:21.308 }, 00:19:21.308 "queue_depth": 128, 00:19:21.308 "io_size": 4096, 00:19:21.308 "runtime": 10.020597, 00:19:21.308 "iops": 3541.80494435611, 00:19:21.308 "mibps": 13.835175563891054, 00:19:21.308 "io_failed": 0, 00:19:21.308 "io_timeout": 0, 00:19:21.308 "avg_latency_us": 36076.27961918357, 00:19:21.308 "min_latency_us": 8349.771851851852, 00:19:21.308 "max_latency_us": 46603.37777777778 00:19:21.308 } 00:19:21.308 ], 00:19:21.308 "core_count": 1 00:19:21.308 } 00:19:21.308 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:21.308 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3755916 00:19:21.308 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3755916 ']' 00:19:21.308 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3755916 00:19:21.308 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:21.308 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.308 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3755916 00:19:21.308 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:21.308 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:21.308 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3755916' 00:19:21.308 killing process with pid 3755916 00:19:21.308 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3755916 00:19:21.308 Received shutdown signal, test time was about 10.000000 seconds 00:19:21.308 00:19:21.308 Latency(us) 00:19:21.308 [2024-11-20T08:52:58.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.308 [2024-11-20T08:52:58.222Z] 
=================================================================================================================== 00:19:21.308 [2024-11-20T08:52:58.222Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:21.308 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3755916 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.REbmCQLKZt 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.REbmCQLKZt 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.REbmCQLKZt 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.REbmCQLKZt 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3757233 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3757233 /var/tmp/bdevperf.sock 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3757233 ']' 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:21.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
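The case launched above (target/tls.sh@147) points bdevperf at the second key, /tmp/tmp.REbmCQLKZt, which was never registered on the target, so the controller attach is expected to fail and the NOT wrapper from autotest_common.sh turns that failure into a pass. A stand-alone sketch of the same assertion follows; the NOT function here is a simplified re-implementation for illustration, not the helper the suite actually uses.

# Succeed only if the wrapped command fails (simplified stand-in for autotest's NOT).
NOT() { if "$@"; then return 1; else return 0; fi; }

BPERF_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
$BPERF_RPC keyring_file_add_key key0 /tmp/tmp.REbmCQLKZt      # a key the target does not know
NOT $BPERF_RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 &&
    echo "attach failed as expected (mismatched PSK)"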
00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.567 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.567 [2024-11-20 09:52:58.345083] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:19:21.567 [2024-11-20 09:52:58.345168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3757233 ] 00:19:21.567 [2024-11-20 09:52:58.410139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.567 [2024-11-20 09:52:58.466806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.824 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.824 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:21.824 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.REbmCQLKZt 00:19:22.081 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:22.339 [2024-11-20 09:52:59.114328] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:22.339 [2024-11-20 09:52:59.125233] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:22.339 [2024-11-20 09:52:59.125616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247f2c0 (107): Transport endpoint is not connected 00:19:22.339 [2024-11-20 09:52:59.126606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247f2c0 (9): Bad file descriptor 00:19:22.339 [2024-11-20 09:52:59.127621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:22.339 [2024-11-20 09:52:59.127648] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:22.339 [2024-11-20 09:52:59.127662] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:22.339 [2024-11-20 09:52:59.127681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:22.339 request: 00:19:22.339 { 00:19:22.339 "name": "TLSTEST", 00:19:22.339 "trtype": "tcp", 00:19:22.339 "traddr": "10.0.0.2", 00:19:22.339 "adrfam": "ipv4", 00:19:22.339 "trsvcid": "4420", 00:19:22.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:22.339 "prchk_reftag": false, 00:19:22.339 "prchk_guard": false, 00:19:22.339 "hdgst": false, 00:19:22.339 "ddgst": false, 00:19:22.339 "psk": "key0", 00:19:22.339 "allow_unrecognized_csi": false, 00:19:22.339 "method": "bdev_nvme_attach_controller", 00:19:22.339 "req_id": 1 00:19:22.339 } 00:19:22.339 Got JSON-RPC error response 00:19:22.339 response: 00:19:22.339 { 00:19:22.339 "code": -5, 00:19:22.339 "message": "Input/output error" 00:19:22.339 } 00:19:22.339 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3757233 00:19:22.339 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3757233 ']' 00:19:22.339 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3757233 00:19:22.339 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:22.339 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.339 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3757233 00:19:22.339 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:22.339 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:22.339 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3757233' 00:19:22.339 killing process with pid 3757233 00:19:22.339 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3757233 00:19:22.339 Received shutdown signal, test time was about 10.000000 seconds 00:19:22.339 00:19:22.339 Latency(us) 00:19:22.339 [2024-11-20T08:52:59.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.339 [2024-11-20T08:52:59.253Z] =================================================================================================================== 00:19:22.339 [2024-11-20T08:52:59.253Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:22.339 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3757233 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.odUnBvpkrd 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.odUnBvpkrd 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.odUnBvpkrd 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.odUnBvpkrd 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3757375 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3757375 /var/tmp/bdevperf.sock 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3757375 ']' 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:22.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.598 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.598 [2024-11-20 09:52:59.460482] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:19:22.598 [2024-11-20 09:52:59.460586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3757375 ] 00:19:22.857 [2024-11-20 09:52:59.527629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.857 [2024-11-20 09:52:59.584825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.857 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.857 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:22.857 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.odUnBvpkrd 00:19:23.115 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:23.373 [2024-11-20 09:53:00.242769] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.373 [2024-11-20 09:53:00.248361] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:23.373 [2024-11-20 09:53:00.248395] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:23.373 [2024-11-20 09:53:00.248446] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:23.373 [2024-11-20 09:53:00.248955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c582c0 (107): Transport endpoint is not connected 00:19:23.373 [2024-11-20 09:53:00.249944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c582c0 (9): Bad file descriptor 00:19:23.373 [2024-11-20 09:53:00.250944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:23.373 [2024-11-20 09:53:00.250965] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:23.373 [2024-11-20 09:53:00.250987] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:23.373 [2024-11-20 09:53:00.251005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:23.373 request: 00:19:23.373 { 00:19:23.373 "name": "TLSTEST", 00:19:23.373 "trtype": "tcp", 00:19:23.373 "traddr": "10.0.0.2", 00:19:23.373 "adrfam": "ipv4", 00:19:23.373 "trsvcid": "4420", 00:19:23.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.373 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:23.373 "prchk_reftag": false, 00:19:23.373 "prchk_guard": false, 00:19:23.373 "hdgst": false, 00:19:23.373 "ddgst": false, 00:19:23.373 "psk": "key0", 00:19:23.373 "allow_unrecognized_csi": false, 00:19:23.373 "method": "bdev_nvme_attach_controller", 00:19:23.373 "req_id": 1 00:19:23.373 } 00:19:23.373 Got JSON-RPC error response 00:19:23.373 response: 00:19:23.373 { 00:19:23.373 "code": -5, 00:19:23.373 "message": "Input/output error" 00:19:23.373 } 00:19:23.373 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3757375 00:19:23.373 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3757375 ']' 00:19:23.373 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3757375 00:19:23.373 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:23.373 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.373 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3757375 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3757375' 00:19:23.632 killing process with pid 3757375 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3757375 00:19:23.632 Received shutdown signal, test time was about 10.000000 seconds 00:19:23.632 00:19:23.632 Latency(us) 00:19:23.632 [2024-11-20T08:53:00.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.632 [2024-11-20T08:53:00.546Z] =================================================================================================================== 00:19:23.632 [2024-11-20T08:53:00.546Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3757375 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.odUnBvpkrd 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.odUnBvpkrd 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.odUnBvpkrd 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.odUnBvpkrd 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3757517 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3757517 /var/tmp/bdevperf.sock 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3757517 ']' 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.632 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.890 [2024-11-20 09:53:00.556858] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:19:23.890 [2024-11-20 09:53:00.556951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3757517 ] 00:19:23.890 [2024-11-20 09:53:00.625002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.890 [2024-11-20 09:53:00.684006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.890 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.890 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:23.890 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.odUnBvpkrd 00:19:24.457 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:24.457 [2024-11-20 09:53:01.338763] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:24.457 [2024-11-20 09:53:01.350890] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:24.457 [2024-11-20 09:53:01.350922] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:24.457 [2024-11-20 09:53:01.350972] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:24.457 [2024-11-20 09:53:01.351005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d12c0 (107): Transport endpoint is not connected 00:19:24.457 [2024-11-20 09:53:01.351968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d12c0 (9): Bad file descriptor 00:19:24.457 [2024-11-20 09:53:01.352968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:24.457 [2024-11-20 09:53:01.352989] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:24.457 [2024-11-20 09:53:01.353003] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:24.457 [2024-11-20 09:53:01.353021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
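Note on the two failures above: the identity the target uses to look up the PSK is built from the host and subsystem NQNs, which is exactly the string the tcp.c/posix.c errors print ("NVMe0R01 <hostnqn> <subnqn>"). The key was registered for nqn.2016-06.io.spdk:host1 against cnode1 only, so dialing with host2 (previous case) or against cnode2 (this case) makes the lookup miss before the TLS handshake can complete. A one-line reconstruction of the lookup string, using the NQNs from this run:

# Reconstruct the PSK identity string reported in the errors above.
hostnqn=nqn.2016-06.io.spdk:host1
subnqn=nqn.2016-06.io.spdk:cnode2
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"   # -> NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2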
00:19:24.457 request: 00:19:24.457 { 00:19:24.457 "name": "TLSTEST", 00:19:24.457 "trtype": "tcp", 00:19:24.457 "traddr": "10.0.0.2", 00:19:24.457 "adrfam": "ipv4", 00:19:24.457 "trsvcid": "4420", 00:19:24.457 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:24.457 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.457 "prchk_reftag": false, 00:19:24.457 "prchk_guard": false, 00:19:24.457 "hdgst": false, 00:19:24.457 "ddgst": false, 00:19:24.457 "psk": "key0", 00:19:24.457 "allow_unrecognized_csi": false, 00:19:24.457 "method": "bdev_nvme_attach_controller", 00:19:24.457 "req_id": 1 00:19:24.457 } 00:19:24.457 Got JSON-RPC error response 00:19:24.457 response: 00:19:24.457 { 00:19:24.457 "code": -5, 00:19:24.457 "message": "Input/output error" 00:19:24.457 } 00:19:24.717 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3757517 00:19:24.717 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3757517 ']' 00:19:24.717 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3757517 00:19:24.717 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:24.717 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.717 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3757517 00:19:24.717 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:24.717 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:24.717 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3757517' 00:19:24.717 killing process with pid 3757517 00:19:24.717 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3757517 00:19:24.717 Received shutdown signal, test time was about 10.000000 seconds 00:19:24.717 00:19:24.717 Latency(us) 00:19:24.717 [2024-11-20T08:53:01.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.717 [2024-11-20T08:53:01.631Z] =================================================================================================================== 00:19:24.717 [2024-11-20T08:53:01.631Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:24.717 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3757517 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:24.975 
09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3757662 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3757662 /var/tmp/bdevperf.sock 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3757662 ']' 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.975 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.975 [2024-11-20 09:53:01.686110] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:19:24.975 [2024-11-20 09:53:01.686209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3757662 ] 00:19:24.975 [2024-11-20 09:53:01.755297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.975 [2024-11-20 09:53:01.814056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.233 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.233 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:25.233 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:25.492 [2024-11-20 09:53:02.186543] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:25.492 [2024-11-20 09:53:02.186589] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:25.492 request: 00:19:25.492 { 00:19:25.492 "name": "key0", 00:19:25.492 "path": "", 00:19:25.492 "method": "keyring_file_add_key", 00:19:25.492 "req_id": 1 00:19:25.492 } 00:19:25.492 Got JSON-RPC error response 00:19:25.492 response: 00:19:25.492 { 00:19:25.492 "code": -1, 00:19:25.492 "message": "Operation not permitted" 00:19:25.492 } 00:19:25.492 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:25.751 [2024-11-20 09:53:02.463439] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.751 [2024-11-20 09:53:02.463505] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:25.751 request: 00:19:25.751 { 00:19:25.751 "name": "TLSTEST", 00:19:25.751 "trtype": "tcp", 00:19:25.751 "traddr": "10.0.0.2", 00:19:25.751 "adrfam": "ipv4", 00:19:25.751 "trsvcid": "4420", 00:19:25.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.751 "prchk_reftag": false, 00:19:25.751 "prchk_guard": false, 00:19:25.751 "hdgst": false, 00:19:25.751 "ddgst": false, 00:19:25.751 "psk": "key0", 00:19:25.751 "allow_unrecognized_csi": false, 00:19:25.751 "method": "bdev_nvme_attach_controller", 00:19:25.751 "req_id": 1 00:19:25.751 } 00:19:25.751 Got JSON-RPC error response 00:19:25.751 response: 00:19:25.751 { 00:19:25.751 "code": -126, 00:19:25.751 "message": "Required key not available" 00:19:25.751 } 00:19:25.751 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3757662 00:19:25.751 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3757662 ']' 00:19:25.751 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3757662 00:19:25.751 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:25.751 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.751 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3757662 00:19:25.751 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:25.751 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:25.751 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3757662' 00:19:25.751 killing process with pid 3757662 00:19:25.751 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3757662 00:19:25.751 Received shutdown signal, test time was about 10.000000 seconds 00:19:25.751 00:19:25.751 Latency(us) 00:19:25.751 [2024-11-20T08:53:02.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.751 [2024-11-20T08:53:02.665Z] =================================================================================================================== 00:19:25.751 [2024-11-20T08:53:02.665Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.751 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3757662 00:19:26.009 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:26.009 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:26.009 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:26.009 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:26.009 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:26.009 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3754008 00:19:26.009 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3754008 ']' 00:19:26.009 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3754008 00:19:26.009 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:26.009 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.009 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3754008 00:19:26.009 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:26.009 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:26.009 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3754008' 00:19:26.009 killing process with pid 3754008 00:19:26.009 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3754008 00:19:26.009 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3754008 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:26.267 09:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.pBzmfrvr1q 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.pBzmfrvr1q 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3757829 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3757829 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3757829 ']' 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.267 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.267 [2024-11-20 09:53:03.132554] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:19:26.267 [2024-11-20 09:53:03.132659] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.526 [2024-11-20 09:53:03.206489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.526 [2024-11-20 09:53:03.260076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.526 [2024-11-20 09:53:03.260149] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
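The interchange-format keys generated in this trace (the :01: keys earlier in the TLS tests and the :02: key_long just above) come from piping the raw hex string through the small python helper in nvmf/common.sh. The base64 payloads visible in the trace start with the ASCII bytes of the hex string itself; the sketch below reproduces that shape under the assumption that the trailing four bytes are the little-endian CRC-32 of those key bytes, as in the NVMe TLS PSK interchange format, and that digest 1/2 select the SHA-256/SHA-384 variants. Treat it as an approximation of format_interchange_psk, not its actual source.

prefix=NVMeTLSkey-1
key=00112233445566778899aabbccddeeff0011223344556677      # used as ASCII bytes, per the base64 in the trace
digest=2                                                   # assumption: 1 = SHA-256 PSK, 2 = SHA-384 PSK
python3 -c "
import base64, struct, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
payload = key + struct.pack('<I', zlib.crc32(key) & 0xffffffff)   # assumption: CRC-32 appended little-endian
print('{}:{:02}:{}:'.format(prefix, digest, base64.b64encode(payload).decode()))
" "$prefix" "$key" "$digest"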
00:19:26.526 [2024-11-20 09:53:03.260171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.526 [2024-11-20 09:53:03.260182] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.526 [2024-11-20 09:53:03.260191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:26.526 [2024-11-20 09:53:03.260777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.526 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.526 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:26.526 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:26.526 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:26.526 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.526 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.526 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.pBzmfrvr1q 00:19:26.526 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pBzmfrvr1q 00:19:26.526 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:26.784 [2024-11-20 09:53:03.639374] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.784 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:27.042 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:27.300 [2024-11-20 09:53:04.156705] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:27.300 [2024-11-20 09:53:04.156941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.300 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:27.866 malloc0 00:19:27.866 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:28.124 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pBzmfrvr1q 00:19:28.382 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:28.640 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pBzmfrvr1q 00:19:28.640 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:28.640 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:28.640 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:28.640 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pBzmfrvr1q 00:19:28.640 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:28.640 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3758102 00:19:28.640 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:28.640 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:28.640 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3758102 /var/tmp/bdevperf.sock 00:19:28.640 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3758102 ']' 00:19:28.640 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.640 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.640 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.640 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.640 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.640 [2024-11-20 09:53:05.346809] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:19:28.640 [2024-11-20 09:53:05.346901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3758102 ] 00:19:28.640 [2024-11-20 09:53:05.418214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.640 [2024-11-20 09:53:05.478782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.898 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.898 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:28.898 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pBzmfrvr1q 00:19:29.156 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:29.414 [2024-11-20 09:53:06.108831] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:29.414 TLSTESTn1 00:19:29.414 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:29.414 Running I/O for 10 seconds... 00:19:31.718 3470.00 IOPS, 13.55 MiB/s [2024-11-20T08:53:09.564Z] 3482.00 IOPS, 13.60 MiB/s [2024-11-20T08:53:10.497Z] 3479.00 IOPS, 13.59 MiB/s [2024-11-20T08:53:11.430Z] 3469.00 IOPS, 13.55 MiB/s [2024-11-20T08:53:12.365Z] 3482.60 IOPS, 13.60 MiB/s [2024-11-20T08:53:13.738Z] 3496.33 IOPS, 13.66 MiB/s [2024-11-20T08:53:14.683Z] 3480.43 IOPS, 13.60 MiB/s [2024-11-20T08:53:15.702Z] 3492.12 IOPS, 13.64 MiB/s [2024-11-20T08:53:16.635Z] 3489.33 IOPS, 13.63 MiB/s [2024-11-20T08:53:16.635Z] 3486.10 IOPS, 13.62 MiB/s 00:19:39.721 Latency(us) 00:19:39.721 [2024-11-20T08:53:16.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.721 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:39.721 Verification LBA range: start 0x0 length 0x2000 00:19:39.721 TLSTESTn1 : 10.02 3491.17 13.64 0.00 0.00 36601.14 8155.59 34369.99 00:19:39.721 [2024-11-20T08:53:16.635Z] =================================================================================================================== 00:19:39.721 [2024-11-20T08:53:16.635Z] Total : 3491.17 13.64 0.00 0.00 36601.14 8155.59 34369.99 00:19:39.721 { 00:19:39.721 "results": [ 00:19:39.721 { 00:19:39.721 "job": "TLSTESTn1", 00:19:39.721 "core_mask": "0x4", 00:19:39.721 "workload": "verify", 00:19:39.721 "status": "finished", 00:19:39.721 "verify_range": { 00:19:39.721 "start": 0, 00:19:39.721 "length": 8192 00:19:39.721 }, 00:19:39.721 "queue_depth": 128, 00:19:39.722 "io_size": 4096, 00:19:39.722 "runtime": 10.022149, 00:19:39.722 "iops": 3491.16741329629, 00:19:39.722 "mibps": 13.637372708188632, 00:19:39.722 "io_failed": 0, 00:19:39.722 "io_timeout": 0, 00:19:39.722 "avg_latency_us": 36601.13900499946, 00:19:39.722 "min_latency_us": 8155.591111111111, 00:19:39.722 "max_latency_us": 34369.991111111114 00:19:39.722 } 00:19:39.722 ], 00:19:39.722 
"core_count": 1 00:19:39.722 } 00:19:39.722 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:39.722 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3758102 00:19:39.722 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3758102 ']' 00:19:39.722 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3758102 00:19:39.722 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:39.722 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.722 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3758102 00:19:39.722 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:39.722 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:39.722 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3758102' 00:19:39.722 killing process with pid 3758102 00:19:39.722 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3758102 00:19:39.722 Received shutdown signal, test time was about 10.000000 seconds 00:19:39.722 00:19:39.722 Latency(us) 00:19:39.722 [2024-11-20T08:53:16.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.722 [2024-11-20T08:53:16.636Z] =================================================================================================================== 00:19:39.722 [2024-11-20T08:53:16.636Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:39.722 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3758102 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.pBzmfrvr1q 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pBzmfrvr1q 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pBzmfrvr1q 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pBzmfrvr1q 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pBzmfrvr1q 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3759423 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3759423 /var/tmp/bdevperf.sock 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3759423 ']' 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.980 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.980 [2024-11-20 09:53:16.709604] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:19:39.980 [2024-11-20 09:53:16.709711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3759423 ] 00:19:39.980 [2024-11-20 09:53:16.779058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.980 [2024-11-20 09:53:16.839101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.238 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.238 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:40.238 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pBzmfrvr1q 00:19:40.497 [2024-11-20 09:53:17.185688] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pBzmfrvr1q': 0100666 00:19:40.497 [2024-11-20 09:53:17.185737] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:40.497 request: 00:19:40.497 { 00:19:40.497 "name": "key0", 00:19:40.497 "path": "/tmp/tmp.pBzmfrvr1q", 00:19:40.497 "method": "keyring_file_add_key", 00:19:40.497 "req_id": 1 00:19:40.497 } 00:19:40.497 Got JSON-RPC error response 00:19:40.497 response: 00:19:40.497 { 00:19:40.497 "code": -1, 00:19:40.497 "message": "Operation not permitted" 00:19:40.497 } 00:19:40.497 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:40.754 [2024-11-20 09:53:17.454522] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.754 [2024-11-20 09:53:17.454590] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:40.754 request: 00:19:40.754 { 00:19:40.754 "name": "TLSTEST", 00:19:40.754 "trtype": "tcp", 00:19:40.754 "traddr": "10.0.0.2", 00:19:40.754 "adrfam": "ipv4", 00:19:40.754 "trsvcid": "4420", 00:19:40.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.755 "prchk_reftag": false, 00:19:40.755 "prchk_guard": false, 00:19:40.755 "hdgst": false, 00:19:40.755 "ddgst": false, 00:19:40.755 "psk": "key0", 00:19:40.755 "allow_unrecognized_csi": false, 00:19:40.755 "method": "bdev_nvme_attach_controller", 00:19:40.755 "req_id": 1 00:19:40.755 } 00:19:40.755 Got JSON-RPC error response 00:19:40.755 response: 00:19:40.755 { 00:19:40.755 "code": -126, 00:19:40.755 "message": "Required key not available" 00:19:40.755 } 00:19:40.755 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3759423 00:19:40.755 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3759423 ']' 00:19:40.755 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3759423 00:19:40.755 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:40.755 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.755 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3759423 00:19:40.755 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:40.755 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:40.755 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3759423' 00:19:40.755 killing process with pid 3759423 00:19:40.755 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3759423 00:19:40.755 Received shutdown signal, test time was about 10.000000 seconds 00:19:40.755 00:19:40.755 Latency(us) 00:19:40.755 [2024-11-20T08:53:17.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.755 [2024-11-20T08:53:17.669Z] =================================================================================================================== 00:19:40.755 [2024-11-20T08:53:17.669Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:40.755 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3759423 00:19:41.013 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:41.013 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:41.013 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.013 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.013 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.013 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3757829 00:19:41.013 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3757829 ']' 00:19:41.013 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3757829 00:19:41.013 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:41.013 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.013 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3757829 00:19:41.013 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:41.013 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:41.013 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3757829' 00:19:41.013 killing process with pid 3757829 00:19:41.013 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3757829 00:19:41.013 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3757829 00:19:41.271 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:41.271 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:41.271 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:41.271 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.271 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3759691 00:19:41.271 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:41.271 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3759691 00:19:41.271 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3759691 ']' 00:19:41.271 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.271 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.271 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.271 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.271 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.271 [2024-11-20 09:53:18.063016] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:19:41.271 [2024-11-20 09:53:18.063133] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.271 [2024-11-20 09:53:18.132899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.529 [2024-11-20 09:53:18.184020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.529 [2024-11-20 09:53:18.184084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.529 [2024-11-20 09:53:18.184104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.529 [2024-11-20 09:53:18.184115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.529 [2024-11-20 09:53:18.184124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
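For readability, a condensed sketch of the negative path exercised just above (target/tls.sh lines 171-172): with the PSK file relaxed to mode 0666, both bdevperf-side RPCs are expected to fail. The commands, socket path, and error codes are taken from the log; scripts/rpc.py abbreviates the full workspace path shown above.

chmod 0666 /tmp/tmp.pBzmfrvr1q
# keyring_file_check_path rejects a key file readable by group/other (logged as 0100666)
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pBzmfrvr1q
#   -> JSON-RPC error -1, "Operation not permitted"
# with no key registered, the TLS attach cannot load the PSK
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
#   -> JSON-RPC error -126, "Required key not available"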
00:19:41.529 [2024-11-20 09:53:18.184830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.529 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.529 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:41.529 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:41.529 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:41.529 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.529 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.529 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.pBzmfrvr1q 00:19:41.529 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:41.529 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.pBzmfrvr1q 00:19:41.529 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:41.529 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.529 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:41.529 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.529 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.pBzmfrvr1q 00:19:41.529 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pBzmfrvr1q 00:19:41.529 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:41.787 [2024-11-20 09:53:18.576297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.787 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:42.045 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:42.303 [2024-11-20 09:53:19.121742] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:42.303 [2024-11-20 09:53:19.121965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.303 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:42.560 malloc0 00:19:42.561 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:43.126 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pBzmfrvr1q 00:19:43.126 [2024-11-20 
09:53:19.990318] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pBzmfrvr1q': 0100666 00:19:43.126 [2024-11-20 09:53:19.990351] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:43.126 request: 00:19:43.126 { 00:19:43.126 "name": "key0", 00:19:43.126 "path": "/tmp/tmp.pBzmfrvr1q", 00:19:43.126 "method": "keyring_file_add_key", 00:19:43.126 "req_id": 1 00:19:43.126 } 00:19:43.126 Got JSON-RPC error response 00:19:43.126 response: 00:19:43.126 { 00:19:43.126 "code": -1, 00:19:43.126 "message": "Operation not permitted" 00:19:43.126 } 00:19:43.126 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:43.383 [2024-11-20 09:53:20.279185] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:43.383 [2024-11-20 09:53:20.279249] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:43.383 request: 00:19:43.383 { 00:19:43.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.383 "host": "nqn.2016-06.io.spdk:host1", 00:19:43.383 "psk": "key0", 00:19:43.383 "method": "nvmf_subsystem_add_host", 00:19:43.383 "req_id": 1 00:19:43.383 } 00:19:43.383 Got JSON-RPC error response 00:19:43.383 response: 00:19:43.383 { 00:19:43.383 "code": -32603, 00:19:43.383 "message": "Internal error" 00:19:43.383 } 00:19:43.383 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:43.383 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:43.383 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:43.383 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:43.641 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3759691 00:19:43.641 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3759691 ']' 00:19:43.641 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3759691 00:19:43.641 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:43.641 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.641 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3759691 00:19:43.641 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:43.641 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:43.641 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3759691' 00:19:43.641 killing process with pid 3759691 00:19:43.641 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3759691 00:19:43.641 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3759691 00:19:43.641 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.pBzmfrvr1q 00:19:43.900 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:43.900 09:53:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:43.900 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:43.900 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.900 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3759988 00:19:43.900 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:43.900 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3759988 00:19:43.900 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3759988 ']' 00:19:43.900 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.900 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:43.900 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.900 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:43.900 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.900 [2024-11-20 09:53:20.606017] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:19:43.900 [2024-11-20 09:53:20.606113] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.900 [2024-11-20 09:53:20.681515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.900 [2024-11-20 09:53:20.743840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.901 [2024-11-20 09:53:20.743891] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.901 [2024-11-20 09:53:20.743904] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.901 [2024-11-20 09:53:20.743915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.901 [2024-11-20 09:53:20.743925] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
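Once the PSK file is restored to mode 0600 (target/tls.sh line 182) and the target is restarted above, the setup_nvmf_tgt pass that follows below is expected to succeed. A condensed sketch of those target-side RPCs, with scripts/rpc.py abbreviating the full workspace path; the arguments are as they appear in the log.

scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k marks the listener as TLS-enabled ("TLS support is considered experimental" in the log)
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# succeeds now that the key file is mode 0600
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pBzmfrvr1q
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0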
00:19:43.901 [2024-11-20 09:53:20.744547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.159 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.159 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:44.159 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:44.159 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:44.159 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.159 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.159 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.pBzmfrvr1q 00:19:44.159 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pBzmfrvr1q 00:19:44.159 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:44.418 [2024-11-20 09:53:21.191574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.418 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:44.676 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:44.934 [2024-11-20 09:53:21.813252] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:44.934 [2024-11-20 09:53:21.813528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.934 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:45.192 malloc0 00:19:45.450 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:45.708 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pBzmfrvr1q 00:19:45.966 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:46.224 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3760280 00:19:46.224 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:46.224 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:46.224 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3760280 /var/tmp/bdevperf.sock 00:19:46.224 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3760280 ']' 00:19:46.224 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.224 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.224 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:46.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:46.224 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.224 09:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.224 [2024-11-20 09:53:22.957222] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:19:46.224 [2024-11-20 09:53:22.957338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3760280 ] 00:19:46.224 [2024-11-20 09:53:23.024559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.224 [2024-11-20 09:53:23.083434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.481 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.481 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:46.481 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pBzmfrvr1q 00:19:46.739 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:46.996 [2024-11-20 09:53:23.727559] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:46.996 TLSTESTn1 00:19:46.996 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:47.253 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:47.253 "subsystems": [ 00:19:47.253 { 00:19:47.253 "subsystem": "keyring", 00:19:47.253 "config": [ 00:19:47.253 { 00:19:47.253 "method": "keyring_file_add_key", 00:19:47.253 "params": { 00:19:47.253 "name": "key0", 00:19:47.253 "path": "/tmp/tmp.pBzmfrvr1q" 00:19:47.253 } 00:19:47.253 } 00:19:47.253 ] 00:19:47.253 }, 00:19:47.253 { 00:19:47.253 "subsystem": "iobuf", 00:19:47.253 "config": [ 00:19:47.253 { 00:19:47.253 "method": "iobuf_set_options", 00:19:47.253 "params": { 00:19:47.253 "small_pool_count": 8192, 00:19:47.253 "large_pool_count": 1024, 00:19:47.253 "small_bufsize": 8192, 00:19:47.253 "large_bufsize": 135168, 00:19:47.253 "enable_numa": false 00:19:47.253 } 00:19:47.253 } 00:19:47.253 ] 00:19:47.253 }, 00:19:47.253 { 00:19:47.253 "subsystem": "sock", 00:19:47.253 "config": [ 00:19:47.253 { 00:19:47.253 "method": "sock_set_default_impl", 00:19:47.253 "params": { 00:19:47.253 "impl_name": "posix" 
00:19:47.253 } 00:19:47.253 }, 00:19:47.253 { 00:19:47.253 "method": "sock_impl_set_options", 00:19:47.253 "params": { 00:19:47.253 "impl_name": "ssl", 00:19:47.253 "recv_buf_size": 4096, 00:19:47.253 "send_buf_size": 4096, 00:19:47.253 "enable_recv_pipe": true, 00:19:47.253 "enable_quickack": false, 00:19:47.253 "enable_placement_id": 0, 00:19:47.253 "enable_zerocopy_send_server": true, 00:19:47.253 "enable_zerocopy_send_client": false, 00:19:47.253 "zerocopy_threshold": 0, 00:19:47.253 "tls_version": 0, 00:19:47.253 "enable_ktls": false 00:19:47.253 } 00:19:47.253 }, 00:19:47.253 { 00:19:47.253 "method": "sock_impl_set_options", 00:19:47.253 "params": { 00:19:47.253 "impl_name": "posix", 00:19:47.253 "recv_buf_size": 2097152, 00:19:47.253 "send_buf_size": 2097152, 00:19:47.253 "enable_recv_pipe": true, 00:19:47.253 "enable_quickack": false, 00:19:47.253 "enable_placement_id": 0, 00:19:47.253 "enable_zerocopy_send_server": true, 00:19:47.253 "enable_zerocopy_send_client": false, 00:19:47.253 "zerocopy_threshold": 0, 00:19:47.253 "tls_version": 0, 00:19:47.253 "enable_ktls": false 00:19:47.253 } 00:19:47.253 } 00:19:47.253 ] 00:19:47.253 }, 00:19:47.253 { 00:19:47.253 "subsystem": "vmd", 00:19:47.253 "config": [] 00:19:47.253 }, 00:19:47.253 { 00:19:47.253 "subsystem": "accel", 00:19:47.253 "config": [ 00:19:47.253 { 00:19:47.253 "method": "accel_set_options", 00:19:47.253 "params": { 00:19:47.253 "small_cache_size": 128, 00:19:47.253 "large_cache_size": 16, 00:19:47.254 "task_count": 2048, 00:19:47.254 "sequence_count": 2048, 00:19:47.254 "buf_count": 2048 00:19:47.254 } 00:19:47.254 } 00:19:47.254 ] 00:19:47.254 }, 00:19:47.254 { 00:19:47.254 "subsystem": "bdev", 00:19:47.254 "config": [ 00:19:47.254 { 00:19:47.254 "method": "bdev_set_options", 00:19:47.254 "params": { 00:19:47.254 "bdev_io_pool_size": 65535, 00:19:47.254 "bdev_io_cache_size": 256, 00:19:47.254 "bdev_auto_examine": true, 00:19:47.254 "iobuf_small_cache_size": 128, 00:19:47.254 "iobuf_large_cache_size": 16 00:19:47.254 } 00:19:47.254 }, 00:19:47.254 { 00:19:47.254 "method": "bdev_raid_set_options", 00:19:47.254 "params": { 00:19:47.254 "process_window_size_kb": 1024, 00:19:47.254 "process_max_bandwidth_mb_sec": 0 00:19:47.254 } 00:19:47.254 }, 00:19:47.254 { 00:19:47.254 "method": "bdev_iscsi_set_options", 00:19:47.254 "params": { 00:19:47.254 "timeout_sec": 30 00:19:47.254 } 00:19:47.254 }, 00:19:47.254 { 00:19:47.254 "method": "bdev_nvme_set_options", 00:19:47.254 "params": { 00:19:47.254 "action_on_timeout": "none", 00:19:47.254 "timeout_us": 0, 00:19:47.254 "timeout_admin_us": 0, 00:19:47.254 "keep_alive_timeout_ms": 10000, 00:19:47.254 "arbitration_burst": 0, 00:19:47.254 "low_priority_weight": 0, 00:19:47.254 "medium_priority_weight": 0, 00:19:47.254 "high_priority_weight": 0, 00:19:47.254 "nvme_adminq_poll_period_us": 10000, 00:19:47.254 "nvme_ioq_poll_period_us": 0, 00:19:47.254 "io_queue_requests": 0, 00:19:47.254 "delay_cmd_submit": true, 00:19:47.254 "transport_retry_count": 4, 00:19:47.254 "bdev_retry_count": 3, 00:19:47.254 "transport_ack_timeout": 0, 00:19:47.254 "ctrlr_loss_timeout_sec": 0, 00:19:47.254 "reconnect_delay_sec": 0, 00:19:47.254 "fast_io_fail_timeout_sec": 0, 00:19:47.254 "disable_auto_failback": false, 00:19:47.254 "generate_uuids": false, 00:19:47.254 "transport_tos": 0, 00:19:47.254 "nvme_error_stat": false, 00:19:47.254 "rdma_srq_size": 0, 00:19:47.254 "io_path_stat": false, 00:19:47.254 "allow_accel_sequence": false, 00:19:47.254 "rdma_max_cq_size": 0, 00:19:47.254 
"rdma_cm_event_timeout_ms": 0, 00:19:47.254 "dhchap_digests": [ 00:19:47.254 "sha256", 00:19:47.254 "sha384", 00:19:47.254 "sha512" 00:19:47.254 ], 00:19:47.254 "dhchap_dhgroups": [ 00:19:47.254 "null", 00:19:47.254 "ffdhe2048", 00:19:47.254 "ffdhe3072", 00:19:47.254 "ffdhe4096", 00:19:47.254 "ffdhe6144", 00:19:47.254 "ffdhe8192" 00:19:47.254 ] 00:19:47.254 } 00:19:47.254 }, 00:19:47.254 { 00:19:47.254 "method": "bdev_nvme_set_hotplug", 00:19:47.254 "params": { 00:19:47.254 "period_us": 100000, 00:19:47.254 "enable": false 00:19:47.254 } 00:19:47.254 }, 00:19:47.254 { 00:19:47.254 "method": "bdev_malloc_create", 00:19:47.254 "params": { 00:19:47.254 "name": "malloc0", 00:19:47.254 "num_blocks": 8192, 00:19:47.254 "block_size": 4096, 00:19:47.254 "physical_block_size": 4096, 00:19:47.254 "uuid": "f57e6c0c-12b2-43d7-91af-00ad03be4a50", 00:19:47.254 "optimal_io_boundary": 0, 00:19:47.254 "md_size": 0, 00:19:47.254 "dif_type": 0, 00:19:47.254 "dif_is_head_of_md": false, 00:19:47.254 "dif_pi_format": 0 00:19:47.254 } 00:19:47.254 }, 00:19:47.254 { 00:19:47.254 "method": "bdev_wait_for_examine" 00:19:47.254 } 00:19:47.254 ] 00:19:47.254 }, 00:19:47.254 { 00:19:47.254 "subsystem": "nbd", 00:19:47.254 "config": [] 00:19:47.254 }, 00:19:47.254 { 00:19:47.254 "subsystem": "scheduler", 00:19:47.254 "config": [ 00:19:47.254 { 00:19:47.254 "method": "framework_set_scheduler", 00:19:47.254 "params": { 00:19:47.254 "name": "static" 00:19:47.254 } 00:19:47.254 } 00:19:47.254 ] 00:19:47.254 }, 00:19:47.254 { 00:19:47.254 "subsystem": "nvmf", 00:19:47.254 "config": [ 00:19:47.254 { 00:19:47.254 "method": "nvmf_set_config", 00:19:47.254 "params": { 00:19:47.254 "discovery_filter": "match_any", 00:19:47.254 "admin_cmd_passthru": { 00:19:47.254 "identify_ctrlr": false 00:19:47.254 }, 00:19:47.254 "dhchap_digests": [ 00:19:47.254 "sha256", 00:19:47.254 "sha384", 00:19:47.254 "sha512" 00:19:47.254 ], 00:19:47.254 "dhchap_dhgroups": [ 00:19:47.254 "null", 00:19:47.254 "ffdhe2048", 00:19:47.254 "ffdhe3072", 00:19:47.254 "ffdhe4096", 00:19:47.254 "ffdhe6144", 00:19:47.254 "ffdhe8192" 00:19:47.254 ] 00:19:47.254 } 00:19:47.254 }, 00:19:47.254 { 00:19:47.254 "method": "nvmf_set_max_subsystems", 00:19:47.254 "params": { 00:19:47.254 "max_subsystems": 1024 00:19:47.254 } 00:19:47.254 }, 00:19:47.254 { 00:19:47.254 "method": "nvmf_set_crdt", 00:19:47.254 "params": { 00:19:47.254 "crdt1": 0, 00:19:47.254 "crdt2": 0, 00:19:47.254 "crdt3": 0 00:19:47.254 } 00:19:47.254 }, 00:19:47.254 { 00:19:47.254 "method": "nvmf_create_transport", 00:19:47.254 "params": { 00:19:47.254 "trtype": "TCP", 00:19:47.254 "max_queue_depth": 128, 00:19:47.254 "max_io_qpairs_per_ctrlr": 127, 00:19:47.254 "in_capsule_data_size": 4096, 00:19:47.254 "max_io_size": 131072, 00:19:47.254 "io_unit_size": 131072, 00:19:47.254 "max_aq_depth": 128, 00:19:47.254 "num_shared_buffers": 511, 00:19:47.254 "buf_cache_size": 4294967295, 00:19:47.254 "dif_insert_or_strip": false, 00:19:47.254 "zcopy": false, 00:19:47.254 "c2h_success": false, 00:19:47.254 "sock_priority": 0, 00:19:47.254 "abort_timeout_sec": 1, 00:19:47.254 "ack_timeout": 0, 00:19:47.254 "data_wr_pool_size": 0 00:19:47.254 } 00:19:47.254 }, 00:19:47.254 { 00:19:47.254 "method": "nvmf_create_subsystem", 00:19:47.254 "params": { 00:19:47.254 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.254 "allow_any_host": false, 00:19:47.254 "serial_number": "SPDK00000000000001", 00:19:47.254 "model_number": "SPDK bdev Controller", 00:19:47.254 "max_namespaces": 10, 00:19:47.254 "min_cntlid": 1, 00:19:47.254 
"max_cntlid": 65519, 00:19:47.254 "ana_reporting": false 00:19:47.254 } 00:19:47.254 }, 00:19:47.254 { 00:19:47.254 "method": "nvmf_subsystem_add_host", 00:19:47.254 "params": { 00:19:47.254 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.254 "host": "nqn.2016-06.io.spdk:host1", 00:19:47.254 "psk": "key0" 00:19:47.254 } 00:19:47.254 }, 00:19:47.254 { 00:19:47.254 "method": "nvmf_subsystem_add_ns", 00:19:47.254 "params": { 00:19:47.254 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.254 "namespace": { 00:19:47.254 "nsid": 1, 00:19:47.254 "bdev_name": "malloc0", 00:19:47.254 "nguid": "F57E6C0C12B243D791AF00AD03BE4A50", 00:19:47.254 "uuid": "f57e6c0c-12b2-43d7-91af-00ad03be4a50", 00:19:47.254 "no_auto_visible": false 00:19:47.254 } 00:19:47.254 } 00:19:47.254 }, 00:19:47.254 { 00:19:47.254 "method": "nvmf_subsystem_add_listener", 00:19:47.254 "params": { 00:19:47.254 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.254 "listen_address": { 00:19:47.254 "trtype": "TCP", 00:19:47.254 "adrfam": "IPv4", 00:19:47.254 "traddr": "10.0.0.2", 00:19:47.254 "trsvcid": "4420" 00:19:47.254 }, 00:19:47.254 "secure_channel": true 00:19:47.254 } 00:19:47.254 } 00:19:47.254 ] 00:19:47.254 } 00:19:47.254 ] 00:19:47.254 }' 00:19:47.254 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:47.818 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:47.818 "subsystems": [ 00:19:47.818 { 00:19:47.818 "subsystem": "keyring", 00:19:47.818 "config": [ 00:19:47.818 { 00:19:47.818 "method": "keyring_file_add_key", 00:19:47.818 "params": { 00:19:47.818 "name": "key0", 00:19:47.818 "path": "/tmp/tmp.pBzmfrvr1q" 00:19:47.818 } 00:19:47.818 } 00:19:47.819 ] 00:19:47.819 }, 00:19:47.819 { 00:19:47.819 "subsystem": "iobuf", 00:19:47.819 "config": [ 00:19:47.819 { 00:19:47.819 "method": "iobuf_set_options", 00:19:47.819 "params": { 00:19:47.819 "small_pool_count": 8192, 00:19:47.819 "large_pool_count": 1024, 00:19:47.819 "small_bufsize": 8192, 00:19:47.819 "large_bufsize": 135168, 00:19:47.819 "enable_numa": false 00:19:47.819 } 00:19:47.819 } 00:19:47.819 ] 00:19:47.819 }, 00:19:47.819 { 00:19:47.819 "subsystem": "sock", 00:19:47.819 "config": [ 00:19:47.819 { 00:19:47.819 "method": "sock_set_default_impl", 00:19:47.819 "params": { 00:19:47.819 "impl_name": "posix" 00:19:47.819 } 00:19:47.819 }, 00:19:47.819 { 00:19:47.819 "method": "sock_impl_set_options", 00:19:47.819 "params": { 00:19:47.819 "impl_name": "ssl", 00:19:47.819 "recv_buf_size": 4096, 00:19:47.819 "send_buf_size": 4096, 00:19:47.819 "enable_recv_pipe": true, 00:19:47.819 "enable_quickack": false, 00:19:47.819 "enable_placement_id": 0, 00:19:47.819 "enable_zerocopy_send_server": true, 00:19:47.819 "enable_zerocopy_send_client": false, 00:19:47.819 "zerocopy_threshold": 0, 00:19:47.819 "tls_version": 0, 00:19:47.819 "enable_ktls": false 00:19:47.819 } 00:19:47.819 }, 00:19:47.819 { 00:19:47.819 "method": "sock_impl_set_options", 00:19:47.819 "params": { 00:19:47.819 "impl_name": "posix", 00:19:47.819 "recv_buf_size": 2097152, 00:19:47.819 "send_buf_size": 2097152, 00:19:47.819 "enable_recv_pipe": true, 00:19:47.819 "enable_quickack": false, 00:19:47.819 "enable_placement_id": 0, 00:19:47.819 "enable_zerocopy_send_server": true, 00:19:47.819 "enable_zerocopy_send_client": false, 00:19:47.819 "zerocopy_threshold": 0, 00:19:47.819 "tls_version": 0, 00:19:47.819 "enable_ktls": false 00:19:47.819 } 00:19:47.819 
} 00:19:47.819 ] 00:19:47.819 }, 00:19:47.819 { 00:19:47.819 "subsystem": "vmd", 00:19:47.819 "config": [] 00:19:47.819 }, 00:19:47.819 { 00:19:47.819 "subsystem": "accel", 00:19:47.819 "config": [ 00:19:47.819 { 00:19:47.819 "method": "accel_set_options", 00:19:47.819 "params": { 00:19:47.819 "small_cache_size": 128, 00:19:47.819 "large_cache_size": 16, 00:19:47.819 "task_count": 2048, 00:19:47.819 "sequence_count": 2048, 00:19:47.819 "buf_count": 2048 00:19:47.819 } 00:19:47.819 } 00:19:47.819 ] 00:19:47.819 }, 00:19:47.819 { 00:19:47.819 "subsystem": "bdev", 00:19:47.819 "config": [ 00:19:47.819 { 00:19:47.819 "method": "bdev_set_options", 00:19:47.819 "params": { 00:19:47.819 "bdev_io_pool_size": 65535, 00:19:47.819 "bdev_io_cache_size": 256, 00:19:47.819 "bdev_auto_examine": true, 00:19:47.819 "iobuf_small_cache_size": 128, 00:19:47.819 "iobuf_large_cache_size": 16 00:19:47.819 } 00:19:47.819 }, 00:19:47.819 { 00:19:47.819 "method": "bdev_raid_set_options", 00:19:47.819 "params": { 00:19:47.819 "process_window_size_kb": 1024, 00:19:47.819 "process_max_bandwidth_mb_sec": 0 00:19:47.819 } 00:19:47.819 }, 00:19:47.819 { 00:19:47.819 "method": "bdev_iscsi_set_options", 00:19:47.819 "params": { 00:19:47.819 "timeout_sec": 30 00:19:47.819 } 00:19:47.819 }, 00:19:47.819 { 00:19:47.819 "method": "bdev_nvme_set_options", 00:19:47.819 "params": { 00:19:47.819 "action_on_timeout": "none", 00:19:47.819 "timeout_us": 0, 00:19:47.819 "timeout_admin_us": 0, 00:19:47.819 "keep_alive_timeout_ms": 10000, 00:19:47.819 "arbitration_burst": 0, 00:19:47.819 "low_priority_weight": 0, 00:19:47.819 "medium_priority_weight": 0, 00:19:47.819 "high_priority_weight": 0, 00:19:47.819 "nvme_adminq_poll_period_us": 10000, 00:19:47.819 "nvme_ioq_poll_period_us": 0, 00:19:47.819 "io_queue_requests": 512, 00:19:47.819 "delay_cmd_submit": true, 00:19:47.819 "transport_retry_count": 4, 00:19:47.819 "bdev_retry_count": 3, 00:19:47.819 "transport_ack_timeout": 0, 00:19:47.819 "ctrlr_loss_timeout_sec": 0, 00:19:47.819 "reconnect_delay_sec": 0, 00:19:47.819 "fast_io_fail_timeout_sec": 0, 00:19:47.819 "disable_auto_failback": false, 00:19:47.819 "generate_uuids": false, 00:19:47.819 "transport_tos": 0, 00:19:47.819 "nvme_error_stat": false, 00:19:47.819 "rdma_srq_size": 0, 00:19:47.819 "io_path_stat": false, 00:19:47.819 "allow_accel_sequence": false, 00:19:47.819 "rdma_max_cq_size": 0, 00:19:47.819 "rdma_cm_event_timeout_ms": 0, 00:19:47.819 "dhchap_digests": [ 00:19:47.819 "sha256", 00:19:47.819 "sha384", 00:19:47.819 "sha512" 00:19:47.819 ], 00:19:47.819 "dhchap_dhgroups": [ 00:19:47.819 "null", 00:19:47.819 "ffdhe2048", 00:19:47.819 "ffdhe3072", 00:19:47.819 "ffdhe4096", 00:19:47.819 "ffdhe6144", 00:19:47.819 "ffdhe8192" 00:19:47.819 ] 00:19:47.819 } 00:19:47.819 }, 00:19:47.819 { 00:19:47.819 "method": "bdev_nvme_attach_controller", 00:19:47.819 "params": { 00:19:47.819 "name": "TLSTEST", 00:19:47.819 "trtype": "TCP", 00:19:47.819 "adrfam": "IPv4", 00:19:47.819 "traddr": "10.0.0.2", 00:19:47.819 "trsvcid": "4420", 00:19:47.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.819 "prchk_reftag": false, 00:19:47.819 "prchk_guard": false, 00:19:47.819 "ctrlr_loss_timeout_sec": 0, 00:19:47.819 "reconnect_delay_sec": 0, 00:19:47.819 "fast_io_fail_timeout_sec": 0, 00:19:47.819 "psk": "key0", 00:19:47.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:47.819 "hdgst": false, 00:19:47.819 "ddgst": false, 00:19:47.819 "multipath": "multipath" 00:19:47.819 } 00:19:47.819 }, 00:19:47.819 { 00:19:47.819 "method": 
"bdev_nvme_set_hotplug", 00:19:47.819 "params": { 00:19:47.819 "period_us": 100000, 00:19:47.819 "enable": false 00:19:47.819 } 00:19:47.819 }, 00:19:47.819 { 00:19:47.819 "method": "bdev_wait_for_examine" 00:19:47.819 } 00:19:47.819 ] 00:19:47.819 }, 00:19:47.819 { 00:19:47.819 "subsystem": "nbd", 00:19:47.819 "config": [] 00:19:47.819 } 00:19:47.819 ] 00:19:47.819 }' 00:19:47.819 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3760280 00:19:47.819 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3760280 ']' 00:19:47.819 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3760280 00:19:47.819 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:47.819 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.819 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3760280 00:19:47.819 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:47.819 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:47.819 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3760280' 00:19:47.819 killing process with pid 3760280 00:19:47.819 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3760280 00:19:47.819 Received shutdown signal, test time was about 10.000000 seconds 00:19:47.819 00:19:47.819 Latency(us) 00:19:47.819 [2024-11-20T08:53:24.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.819 [2024-11-20T08:53:24.733Z] =================================================================================================================== 00:19:47.819 [2024-11-20T08:53:24.733Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:47.819 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3760280 00:19:48.077 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3759988 00:19:48.077 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3759988 ']' 00:19:48.077 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3759988 00:19:48.077 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:48.077 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.077 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3759988 00:19:48.077 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:48.077 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:48.077 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3759988' 00:19:48.077 killing process with pid 3759988 00:19:48.077 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3759988 00:19:48.077 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3759988 00:19:48.336 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:48.336 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:48.336 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:48.336 "subsystems": [ 00:19:48.336 { 00:19:48.336 "subsystem": "keyring", 00:19:48.336 "config": [ 00:19:48.336 { 00:19:48.336 "method": "keyring_file_add_key", 00:19:48.336 "params": { 00:19:48.336 "name": "key0", 00:19:48.336 "path": "/tmp/tmp.pBzmfrvr1q" 00:19:48.336 } 00:19:48.336 } 00:19:48.336 ] 00:19:48.336 }, 00:19:48.336 { 00:19:48.336 "subsystem": "iobuf", 00:19:48.336 "config": [ 00:19:48.336 { 00:19:48.336 "method": "iobuf_set_options", 00:19:48.336 "params": { 00:19:48.336 "small_pool_count": 8192, 00:19:48.336 "large_pool_count": 1024, 00:19:48.336 "small_bufsize": 8192, 00:19:48.336 "large_bufsize": 135168, 00:19:48.336 "enable_numa": false 00:19:48.336 } 00:19:48.336 } 00:19:48.336 ] 00:19:48.336 }, 00:19:48.336 { 00:19:48.336 "subsystem": "sock", 00:19:48.336 "config": [ 00:19:48.336 { 00:19:48.336 "method": "sock_set_default_impl", 00:19:48.336 "params": { 00:19:48.336 "impl_name": "posix" 00:19:48.336 } 00:19:48.336 }, 00:19:48.336 { 00:19:48.336 "method": "sock_impl_set_options", 00:19:48.336 "params": { 00:19:48.336 "impl_name": "ssl", 00:19:48.336 "recv_buf_size": 4096, 00:19:48.336 "send_buf_size": 4096, 00:19:48.336 "enable_recv_pipe": true, 00:19:48.336 "enable_quickack": false, 00:19:48.336 "enable_placement_id": 0, 00:19:48.336 "enable_zerocopy_send_server": true, 00:19:48.336 "enable_zerocopy_send_client": false, 00:19:48.336 "zerocopy_threshold": 0, 00:19:48.336 "tls_version": 0, 00:19:48.336 "enable_ktls": false 00:19:48.336 } 00:19:48.336 }, 00:19:48.336 { 00:19:48.336 "method": "sock_impl_set_options", 00:19:48.336 "params": { 00:19:48.336 "impl_name": "posix", 00:19:48.336 "recv_buf_size": 2097152, 00:19:48.336 "send_buf_size": 2097152, 00:19:48.336 "enable_recv_pipe": true, 00:19:48.336 "enable_quickack": false, 00:19:48.336 "enable_placement_id": 0, 00:19:48.336 "enable_zerocopy_send_server": true, 00:19:48.336 "enable_zerocopy_send_client": false, 00:19:48.336 "zerocopy_threshold": 0, 00:19:48.336 "tls_version": 0, 00:19:48.336 "enable_ktls": false 00:19:48.336 } 00:19:48.336 } 00:19:48.336 ] 00:19:48.336 }, 00:19:48.336 { 00:19:48.336 "subsystem": "vmd", 00:19:48.336 "config": [] 00:19:48.336 }, 00:19:48.336 { 00:19:48.336 "subsystem": "accel", 00:19:48.336 "config": [ 00:19:48.336 { 00:19:48.336 "method": "accel_set_options", 00:19:48.336 "params": { 00:19:48.336 "small_cache_size": 128, 00:19:48.336 "large_cache_size": 16, 00:19:48.336 "task_count": 2048, 00:19:48.336 "sequence_count": 2048, 00:19:48.336 "buf_count": 2048 00:19:48.336 } 00:19:48.336 } 00:19:48.336 ] 00:19:48.336 }, 00:19:48.336 { 00:19:48.336 "subsystem": "bdev", 00:19:48.336 "config": [ 00:19:48.336 { 00:19:48.336 "method": "bdev_set_options", 00:19:48.336 "params": { 00:19:48.336 "bdev_io_pool_size": 65535, 00:19:48.336 "bdev_io_cache_size": 256, 00:19:48.336 "bdev_auto_examine": true, 00:19:48.336 "iobuf_small_cache_size": 128, 00:19:48.336 "iobuf_large_cache_size": 16 00:19:48.336 } 00:19:48.336 }, 00:19:48.336 { 00:19:48.336 "method": "bdev_raid_set_options", 00:19:48.336 "params": { 00:19:48.336 "process_window_size_kb": 1024, 00:19:48.336 "process_max_bandwidth_mb_sec": 0 00:19:48.336 } 00:19:48.336 }, 00:19:48.336 { 00:19:48.336 "method": "bdev_iscsi_set_options", 00:19:48.336 "params": { 00:19:48.336 
"timeout_sec": 30 00:19:48.336 } 00:19:48.336 }, 00:19:48.337 { 00:19:48.337 "method": "bdev_nvme_set_options", 00:19:48.337 "params": { 00:19:48.337 "action_on_timeout": "none", 00:19:48.337 "timeout_us": 0, 00:19:48.337 "timeout_admin_us": 0, 00:19:48.337 "keep_alive_timeout_ms": 10000, 00:19:48.337 "arbitration_burst": 0, 00:19:48.337 "low_priority_weight": 0, 00:19:48.337 "medium_priority_weight": 0, 00:19:48.337 "high_priority_weight": 0, 00:19:48.337 "nvme_adminq_poll_period_us": 10000, 00:19:48.337 "nvme_ioq_poll_period_us": 0, 00:19:48.337 "io_queue_requests": 0, 00:19:48.337 "delay_cmd_submit": true, 00:19:48.337 "transport_retry_count": 4, 00:19:48.337 "bdev_retry_count": 3, 00:19:48.337 "transport_ack_timeout": 0, 00:19:48.337 "ctrlr_loss_timeout_sec": 0, 00:19:48.337 "reconnect_delay_sec": 0, 00:19:48.337 "fast_io_fail_timeout_sec": 0, 00:19:48.337 "disable_auto_failback": false, 00:19:48.337 "generate_uuids": false, 00:19:48.337 "transport_tos": 0, 00:19:48.337 "nvme_error_stat": false, 00:19:48.337 "rdma_srq_size": 0, 00:19:48.337 "io_path_stat": false, 00:19:48.337 "allow_accel_sequence": false, 00:19:48.337 "rdma_max_cq_size": 0, 00:19:48.337 "rdma_cm_event_timeout_ms": 0, 00:19:48.337 "dhchap_digests": [ 00:19:48.337 "sha256", 00:19:48.337 "sha384", 00:19:48.337 "sha512" 00:19:48.337 ], 00:19:48.337 "dhchap_dhgroups": [ 00:19:48.337 "null", 00:19:48.337 "ffdhe2048", 00:19:48.337 "ffdhe3072", 00:19:48.337 "ffdhe4096", 00:19:48.337 "ffdhe6144", 00:19:48.337 "ffdhe8192" 00:19:48.337 ] 00:19:48.337 } 00:19:48.337 }, 00:19:48.337 { 00:19:48.337 "method": "bdev_nvme_set_hotplug", 00:19:48.337 "params": { 00:19:48.337 "period_us": 100000, 00:19:48.337 "enable": false 00:19:48.337 } 00:19:48.337 }, 00:19:48.337 { 00:19:48.337 "method": "bdev_malloc_create", 00:19:48.337 "params": { 00:19:48.337 "name": "malloc0", 00:19:48.337 "num_blocks": 8192, 00:19:48.337 "block_size": 4096, 00:19:48.337 "physical_block_size": 4096, 00:19:48.337 "uuid": "f57e6c0c-12b2-43d7-91af-00ad03be4a50", 00:19:48.337 "optimal_io_boundary": 0, 00:19:48.337 "md_size": 0, 00:19:48.337 "dif_type": 0, 00:19:48.337 "dif_is_head_of_md": false, 00:19:48.337 "dif_pi_format": 0 00:19:48.337 } 00:19:48.337 }, 00:19:48.337 { 00:19:48.337 "method": "bdev_wait_for_examine" 00:19:48.337 } 00:19:48.337 ] 00:19:48.337 }, 00:19:48.337 { 00:19:48.337 "subsystem": "nbd", 00:19:48.337 "config": [] 00:19:48.337 }, 00:19:48.337 { 00:19:48.337 "subsystem": "scheduler", 00:19:48.337 "config": [ 00:19:48.337 { 00:19:48.337 "method": "framework_set_scheduler", 00:19:48.337 "params": { 00:19:48.337 "name": "static" 00:19:48.337 } 00:19:48.337 } 00:19:48.337 ] 00:19:48.337 }, 00:19:48.337 { 00:19:48.337 "subsystem": "nvmf", 00:19:48.337 "config": [ 00:19:48.337 { 00:19:48.337 "method": "nvmf_set_config", 00:19:48.337 "params": { 00:19:48.337 "discovery_filter": "match_any", 00:19:48.337 "admin_cmd_passthru": { 00:19:48.337 "identify_ctrlr": false 00:19:48.337 }, 00:19:48.337 "dhchap_digests": [ 00:19:48.337 "sha256", 00:19:48.337 "sha384", 00:19:48.337 "sha512" 00:19:48.337 ], 00:19:48.337 "dhchap_dhgroups": [ 00:19:48.337 "null", 00:19:48.337 "ffdhe2048", 00:19:48.337 "ffdhe3072", 00:19:48.337 "ffdhe4096", 00:19:48.337 "ffdhe6144", 00:19:48.337 "ffdhe8192" 00:19:48.337 ] 00:19:48.337 } 00:19:48.337 }, 00:19:48.337 { 00:19:48.337 "method": "nvmf_set_max_subsystems", 00:19:48.337 "params": { 00:19:48.337 "max_subsystems": 1024 00:19:48.337 } 00:19:48.337 }, 00:19:48.337 { 00:19:48.337 "method": "nvmf_set_crdt", 00:19:48.337 "params": { 
00:19:48.337 "crdt1": 0, 00:19:48.337 "crdt2": 0, 00:19:48.337 "crdt3": 0 00:19:48.337 } 00:19:48.337 }, 00:19:48.337 { 00:19:48.337 "method": "nvmf_create_transport", 00:19:48.337 "params": { 00:19:48.337 "trtype": "TCP", 00:19:48.337 "max_queue_depth": 128, 00:19:48.337 "max_io_qpairs_per_ctrlr": 127, 00:19:48.337 "in_capsule_data_size": 4096, 00:19:48.337 "max_io_size": 131072, 00:19:48.337 "io_unit_size": 131072, 00:19:48.337 "max_aq_depth": 128, 00:19:48.337 "num_shared_buffers": 511, 00:19:48.337 "buf_cache_size": 4294967295, 00:19:48.337 "dif_insert_or_strip": false, 00:19:48.337 "zcopy": false, 00:19:48.337 "c2h_success": false, 00:19:48.337 "sock_priority": 0, 00:19:48.337 "abort_timeout_sec": 1, 00:19:48.337 "ack_timeout": 0, 00:19:48.337 "data_wr_pool_size": 0 00:19:48.337 } 00:19:48.337 }, 00:19:48.337 { 00:19:48.337 "method": "nvmf_create_subsystem", 00:19:48.337 "params": { 00:19:48.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.337 "allow_any_host": false, 00:19:48.337 "serial_number": "SPDK00000000000001", 00:19:48.337 "model_number": "SPDK bdev Controller", 00:19:48.337 "max_namespaces": 10, 00:19:48.337 "min_cntlid": 1, 00:19:48.337 "max_cntlid": 65519, 00:19:48.337 "ana_reporting": false 00:19:48.337 } 00:19:48.337 }, 00:19:48.337 { 00:19:48.337 "method": "nvmf_subsystem_add_host", 00:19:48.337 "params": { 00:19:48.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.337 "host": "nqn.2016-06.io.spdk:host1", 00:19:48.337 "psk": "key0" 00:19:48.337 } 00:19:48.337 }, 00:19:48.337 { 00:19:48.337 "method": "nvmf_subsystem_add_ns", 00:19:48.337 "params": { 00:19:48.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.337 "namespace": { 00:19:48.337 "nsid": 1, 00:19:48.337 "bdev_name": "malloc0", 00:19:48.337 "nguid": "F57E6C0C12B243D791AF00AD03BE4A50", 00:19:48.337 "uuid": "f57e6c0c-12b2-43d7-91af-00ad03be4a50", 00:19:48.337 "no_auto_visible": false 00:19:48.337 } 00:19:48.337 } 00:19:48.337 }, 00:19:48.337 { 00:19:48.337 "method": "nvmf_subsystem_add_listener", 00:19:48.337 "params": { 00:19:48.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.337 "listen_address": { 00:19:48.337 "trtype": "TCP", 00:19:48.337 "adrfam": "IPv4", 00:19:48.337 "traddr": "10.0.0.2", 00:19:48.337 "trsvcid": "4420" 00:19:48.337 }, 00:19:48.337 "secure_channel": true 00:19:48.337 } 00:19:48.337 } 00:19:48.337 ] 00:19:48.337 } 00:19:48.337 ] 00:19:48.337 }' 00:19:48.337 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:48.337 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.337 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3760563 00:19:48.337 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:48.337 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3760563 00:19:48.337 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3760563 ']' 00:19:48.337 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.337 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.337 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:48.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.337 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.337 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.337 [2024-11-20 09:53:25.054341] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:19:48.337 [2024-11-20 09:53:25.054435] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.337 [2024-11-20 09:53:25.124591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.337 [2024-11-20 09:53:25.180606] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.338 [2024-11-20 09:53:25.180662] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.338 [2024-11-20 09:53:25.180684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.338 [2024-11-20 09:53:25.180696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.338 [2024-11-20 09:53:25.180713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.338 [2024-11-20 09:53:25.181333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.596 [2024-11-20 09:53:25.414101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.596 [2024-11-20 09:53:25.446149] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:48.596 [2024-11-20 09:53:25.446415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.161 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.161 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:49.161 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:49.161 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:49.161 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.419 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.419 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3760711 00:19:49.419 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3760711 /var/tmp/bdevperf.sock 00:19:49.419 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3760711 ']' 00:19:49.419 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.419 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:49.419 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.419 09:53:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:49.419 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:49.419 "subsystems": [ 00:19:49.419 { 00:19:49.419 "subsystem": "keyring", 00:19:49.419 "config": [ 00:19:49.419 { 00:19:49.419 "method": "keyring_file_add_key", 00:19:49.419 "params": { 00:19:49.419 "name": "key0", 00:19:49.419 "path": "/tmp/tmp.pBzmfrvr1q" 00:19:49.419 } 00:19:49.419 } 00:19:49.419 ] 00:19:49.419 }, 00:19:49.419 { 00:19:49.419 "subsystem": "iobuf", 00:19:49.419 "config": [ 00:19:49.419 { 00:19:49.419 "method": "iobuf_set_options", 00:19:49.419 "params": { 00:19:49.419 "small_pool_count": 8192, 00:19:49.419 "large_pool_count": 1024, 00:19:49.420 "small_bufsize": 8192, 00:19:49.420 "large_bufsize": 135168, 00:19:49.420 "enable_numa": false 00:19:49.420 } 00:19:49.420 } 00:19:49.420 ] 00:19:49.420 }, 00:19:49.420 { 00:19:49.420 "subsystem": "sock", 00:19:49.420 "config": [ 00:19:49.420 { 00:19:49.420 "method": "sock_set_default_impl", 00:19:49.420 "params": { 00:19:49.420 "impl_name": "posix" 00:19:49.420 } 00:19:49.420 }, 00:19:49.420 { 00:19:49.420 "method": "sock_impl_set_options", 00:19:49.420 "params": { 00:19:49.420 "impl_name": "ssl", 00:19:49.420 "recv_buf_size": 4096, 00:19:49.420 "send_buf_size": 4096, 00:19:49.420 "enable_recv_pipe": true, 00:19:49.420 "enable_quickack": false, 00:19:49.420 "enable_placement_id": 0, 00:19:49.420 "enable_zerocopy_send_server": true, 00:19:49.420 "enable_zerocopy_send_client": false, 00:19:49.420 "zerocopy_threshold": 0, 00:19:49.420 "tls_version": 0, 00:19:49.420 "enable_ktls": false 00:19:49.420 } 00:19:49.420 }, 00:19:49.420 { 00:19:49.420 "method": "sock_impl_set_options", 00:19:49.420 "params": { 00:19:49.420 "impl_name": "posix", 00:19:49.420 "recv_buf_size": 2097152, 00:19:49.420 "send_buf_size": 2097152, 00:19:49.420 "enable_recv_pipe": true, 00:19:49.420 "enable_quickack": false, 00:19:49.420 "enable_placement_id": 0, 00:19:49.420 "enable_zerocopy_send_server": true, 00:19:49.420 "enable_zerocopy_send_client": false, 00:19:49.420 "zerocopy_threshold": 0, 00:19:49.420 "tls_version": 0, 00:19:49.420 "enable_ktls": false 00:19:49.420 } 00:19:49.420 } 00:19:49.420 ] 00:19:49.420 }, 00:19:49.420 { 00:19:49.420 "subsystem": "vmd", 00:19:49.420 "config": [] 00:19:49.420 }, 00:19:49.420 { 00:19:49.420 "subsystem": "accel", 00:19:49.420 "config": [ 00:19:49.420 { 00:19:49.420 "method": "accel_set_options", 00:19:49.420 "params": { 00:19:49.420 "small_cache_size": 128, 00:19:49.420 "large_cache_size": 16, 00:19:49.420 "task_count": 2048, 00:19:49.420 "sequence_count": 2048, 00:19:49.420 "buf_count": 2048 00:19:49.420 } 00:19:49.420 } 00:19:49.420 ] 00:19:49.420 }, 00:19:49.420 { 00:19:49.420 "subsystem": "bdev", 00:19:49.420 "config": [ 00:19:49.420 { 00:19:49.420 "method": "bdev_set_options", 00:19:49.420 "params": { 00:19:49.420 "bdev_io_pool_size": 65535, 00:19:49.420 "bdev_io_cache_size": 256, 00:19:49.420 "bdev_auto_examine": true, 00:19:49.420 "iobuf_small_cache_size": 128, 00:19:49.420 "iobuf_large_cache_size": 16 00:19:49.420 } 00:19:49.420 }, 00:19:49.420 { 00:19:49.420 "method": "bdev_raid_set_options", 00:19:49.420 "params": { 00:19:49.420 "process_window_size_kb": 1024, 00:19:49.420 "process_max_bandwidth_mb_sec": 0 00:19:49.420 } 00:19:49.420 }, 
00:19:49.420 { 00:19:49.420 "method": "bdev_iscsi_set_options", 00:19:49.420 "params": { 00:19:49.420 "timeout_sec": 30 00:19:49.420 } 00:19:49.420 }, 00:19:49.420 { 00:19:49.420 "method": "bdev_nvme_set_options", 00:19:49.420 "params": { 00:19:49.420 "action_on_timeout": "none", 00:19:49.420 "timeout_us": 0, 00:19:49.420 "timeout_admin_us": 0, 00:19:49.420 "keep_alive_timeout_ms": 10000, 00:19:49.420 "arbitration_burst": 0, 00:19:49.420 "low_priority_weight": 0, 00:19:49.420 "medium_priority_weight": 0, 00:19:49.420 "high_priority_weight": 0, 00:19:49.420 "nvme_adminq_poll_period_us": 10000, 00:19:49.420 "nvme_ioq_poll_period_us": 0, 00:19:49.420 "io_queue_requests": 512, 00:19:49.420 "delay_cmd_submit": true, 00:19:49.420 "transport_retry_count": 4, 00:19:49.420 "bdev_retry_count": 3, 00:19:49.420 "transport_ack_timeout": 0, 00:19:49.420 "ctrlr_loss_timeout_sec": 0, 00:19:49.420 "reconnect_delay_sec": 0, 00:19:49.420 "fast_io_fail_timeout_sec": 0, 00:19:49.420 "disable_auto_failback": false, 00:19:49.420 "generate_uuids": false, 00:19:49.420 "transport_tos": 0, 00:19:49.420 "nvme_error_stat": false, 00:19:49.420 "rdma_srq_size": 0, 00:19:49.420 "io_path_stat": false, 00:19:49.420 "allow_accel_sequence": false, 00:19:49.420 "rdma_max_cq_size": 0, 00:19:49.420 "rdma_cm_event_timeout_ms": 0, 00:19:49.420 "dhchap_digests": [ 00:19:49.420 "sha256", 00:19:49.420 "sha384", 00:19:49.420 "sha512" 00:19:49.420 ], 00:19:49.420 "dhchap_dhgroups": [ 00:19:49.420 "null", 00:19:49.420 "ffdhe2048", 00:19:49.420 "ffdhe3072", 00:19:49.420 "ffdhe4096", 00:19:49.420 "ffdhe6144", 00:19:49.420 "ffdhe8192" 00:19:49.420 ] 00:19:49.420 } 00:19:49.420 }, 00:19:49.420 { 00:19:49.420 "method": "bdev_nvme_attach_controller", 00:19:49.420 "params": { 00:19:49.420 "name": "TLSTEST", 00:19:49.420 "trtype": "TCP", 00:19:49.420 "adrfam": "IPv4", 00:19:49.420 "traddr": "10.0.0.2", 00:19:49.420 "trsvcid": "4420", 00:19:49.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.420 "prchk_reftag": false, 00:19:49.420 "prchk_guard": false, 00:19:49.420 "ctrlr_loss_timeout_sec": 0, 00:19:49.420 "reconnect_delay_sec": 0, 00:19:49.420 "fast_io_fail_timeout_sec": 0, 00:19:49.420 "psk": "key0", 00:19:49.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:49.420 "hdgst": false, 00:19:49.420 "ddgst": false, 00:19:49.420 "multipath": "multipath" 00:19:49.420 } 00:19:49.420 }, 00:19:49.420 { 00:19:49.420 "method": "bdev_nvme_set_hotplug", 00:19:49.420 "params": { 00:19:49.420 "period_us": 100000, 00:19:49.420 "enable": false 00:19:49.420 } 00:19:49.420 }, 00:19:49.420 { 00:19:49.420 "method": "bdev_wait_for_examine" 00:19:49.420 } 00:19:49.420 ] 00:19:49.420 }, 00:19:49.420 { 00:19:49.420 "subsystem": "nbd", 00:19:49.420 "config": [] 00:19:49.420 } 00:19:49.420 ] 00:19:49.420 }' 00:19:49.420 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.420 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.420 [2024-11-20 09:53:26.130791] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
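The JSON blob handed to this first bdevperf instance does two TLS-relevant things on the initiator side: it registers the PSK file /tmp/tmp.pBzmfrvr1q in the keyring under the name key0, and it attaches the NVMe/TCP controller with "psk": "key0" so the connection to 10.0.0.2:4420 is brought up over TLS. The later runs in this log reach the same state over the bdevperf RPC socket instead of a pre-generated config (and call the controller nvme0 rather than TLSTEST); a minimal sketch of that RPC-driven form, reusing the socket path, key file and NQNs seen in this run:

  # Initiator-side TLS setup over the bdevperf RPC socket (sketch; values taken from this log)
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # Register the pre-shared key file under the name the attach call will reference
  $RPC keyring_file_add_key key0 /tmp/tmp.pBzmfrvr1q
  # Attach the controller; --psk key0 is what makes bdev_nvme bring the connection up over TLS
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1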
00:19:49.420 [2024-11-20 09:53:26.130881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3760711 ] 00:19:49.420 [2024-11-20 09:53:26.196940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.420 [2024-11-20 09:53:26.254753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.679 [2024-11-20 09:53:26.435143] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.679 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.679 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:49.679 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:49.937 Running I/O for 10 seconds... 00:19:51.796 3276.00 IOPS, 12.80 MiB/s [2024-11-20T08:53:30.081Z] 3376.00 IOPS, 13.19 MiB/s [2024-11-20T08:53:31.014Z] 3407.67 IOPS, 13.31 MiB/s [2024-11-20T08:53:31.946Z] 3447.50 IOPS, 13.47 MiB/s [2024-11-20T08:53:32.879Z] 3450.20 IOPS, 13.48 MiB/s [2024-11-20T08:53:33.813Z] 3451.00 IOPS, 13.48 MiB/s [2024-11-20T08:53:34.746Z] 3456.86 IOPS, 13.50 MiB/s [2024-11-20T08:53:35.680Z] 3463.88 IOPS, 13.53 MiB/s [2024-11-20T08:53:37.109Z] 3466.00 IOPS, 13.54 MiB/s [2024-11-20T08:53:37.109Z] 3467.40 IOPS, 13.54 MiB/s 00:20:00.195 Latency(us) 00:20:00.195 [2024-11-20T08:53:37.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.195 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:00.195 Verification LBA range: start 0x0 length 0x2000 00:20:00.195 TLSTESTn1 : 10.02 3474.09 13.57 0.00 0.00 36788.17 6140.97 44661.57 00:20:00.195 [2024-11-20T08:53:37.109Z] =================================================================================================================== 00:20:00.195 [2024-11-20T08:53:37.109Z] Total : 3474.09 13.57 0.00 0.00 36788.17 6140.97 44661.57 00:20:00.195 { 00:20:00.195 "results": [ 00:20:00.195 { 00:20:00.195 "job": "TLSTESTn1", 00:20:00.195 "core_mask": "0x4", 00:20:00.195 "workload": "verify", 00:20:00.195 "status": "finished", 00:20:00.195 "verify_range": { 00:20:00.195 "start": 0, 00:20:00.195 "length": 8192 00:20:00.195 }, 00:20:00.195 "queue_depth": 128, 00:20:00.195 "io_size": 4096, 00:20:00.195 "runtime": 10.017309, 00:20:00.195 "iops": 3474.0867033252143, 00:20:00.195 "mibps": 13.570651184864118, 00:20:00.195 "io_failed": 0, 00:20:00.195 "io_timeout": 0, 00:20:00.195 "avg_latency_us": 36788.165846405005, 00:20:00.195 "min_latency_us": 6140.965925925926, 00:20:00.195 "max_latency_us": 44661.57037037037 00:20:00.195 } 00:20:00.195 ], 00:20:00.195 "core_count": 1 00:20:00.195 } 00:20:00.195 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:00.195 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3760711 00:20:00.195 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3760711 ']' 00:20:00.195 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3760711 00:20:00.195 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:20:00.195 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.195 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3760711 00:20:00.195 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:00.195 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:00.195 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3760711' 00:20:00.195 killing process with pid 3760711 00:20:00.195 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3760711 00:20:00.195 Received shutdown signal, test time was about 10.000000 seconds 00:20:00.195 00:20:00.195 Latency(us) 00:20:00.195 [2024-11-20T08:53:37.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.195 [2024-11-20T08:53:37.109Z] =================================================================================================================== 00:20:00.195 [2024-11-20T08:53:37.109Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:00.195 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3760711 00:20:00.195 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3760563 00:20:00.195 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3760563 ']' 00:20:00.195 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3760563 00:20:00.195 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:00.195 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.195 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3760563 00:20:00.195 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:00.195 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:00.195 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3760563' 00:20:00.195 killing process with pid 3760563 00:20:00.196 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3760563 00:20:00.196 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3760563 00:20:00.453 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:00.453 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:00.453 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:00.453 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.453 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3761958 00:20:00.453 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:00.453 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3761958 
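As a quick sanity check on the 10-second run reported above, the MiB/s column is just IOPS multiplied by the workload's 4096-byte I/O size; for example (plain awk arithmetic, not part of the test itself):

  # 3474.09 IOPS x 4096 bytes per I/O, converted to MiB/s
  awk 'BEGIN { printf "%.2f\n", 3474.09 * 4096 / (1024 * 1024) }'    # prints 13.57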
00:20:00.453 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3761958 ']' 00:20:00.453 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.453 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.453 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.453 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.453 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.453 [2024-11-20 09:53:37.315978] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:20:00.453 [2024-11-20 09:53:37.316080] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.712 [2024-11-20 09:53:37.391974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.712 [2024-11-20 09:53:37.448488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.712 [2024-11-20 09:53:37.448537] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.712 [2024-11-20 09:53:37.448559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.712 [2024-11-20 09:53:37.448569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.712 [2024-11-20 09:53:37.448593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
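Since this fresh target was started with -e 0xFFFF, every tracepoint group is enabled, and the startup notices above already name the two ways to pull a trace out of instance 0; in runnable form (the copy destination below is only illustrative):

  # Snapshot nvmf tracepoints from app instance 0, exactly as the notice above suggests
  spdk_trace -s nvmf -i 0
  # ...or keep the shared-memory trace file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.snapshot    # hypothetical destination path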
00:20:00.712 [2024-11-20 09:53:37.449125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.712 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.712 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:00.712 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.712 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:00.712 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.712 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.712 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.pBzmfrvr1q 00:20:00.712 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pBzmfrvr1q 00:20:00.712 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:01.275 [2024-11-20 09:53:37.885822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.275 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:01.533 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:01.790 [2024-11-20 09:53:38.455454] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:01.790 [2024-11-20 09:53:38.455731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.790 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:02.047 malloc0 00:20:02.047 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:02.303 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pBzmfrvr1q 00:20:02.558 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:02.814 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3762301 00:20:02.814 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:02.814 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:02.814 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3762301 /var/tmp/bdevperf.sock 00:20:02.814 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3762301 ']' 00:20:02.814 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:02.814 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.814 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:02.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:02.814 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.814 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.814 [2024-11-20 09:53:39.590408] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:20:02.814 [2024-11-20 09:53:39.590506] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3762301 ] 00:20:02.814 [2024-11-20 09:53:39.656436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.814 [2024-11-20 09:53:39.713820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.071 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:03.071 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:03.071 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pBzmfrvr1q 00:20:03.328 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:03.585 [2024-11-20 09:53:40.352882] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:03.585 nvme0n1 00:20:03.585 09:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:03.843 Running I/O for 1 seconds... 
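For reference, the setup_nvmf_tgt steps traced above are the entire target half of the TLS configuration for this run; gathered into one sketch, with the key path, NQNs and listener address as used here (rpc.py talks to the target's default /var/tmp/spdk.sock):

  # Target-side NVMe/TCP + TLS setup, consolidated from the rpc.py calls traced above (sketch)
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k is what produces the TLS listener notices above
  $RPC bdev_malloc_create 32 4096 -b malloc0            # backing bdev for the namespace
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 /tmp/tmp.pBzmfrvr1q    # PSK file shared with the host entry below
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0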
00:20:04.777 3215.00 IOPS, 12.56 MiB/s 00:20:04.777 Latency(us) 00:20:04.777 [2024-11-20T08:53:41.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.777 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:04.777 Verification LBA range: start 0x0 length 0x2000 00:20:04.777 nvme0n1 : 1.02 3276.31 12.80 0.00 0.00 38734.61 9466.31 36117.62 00:20:04.777 [2024-11-20T08:53:41.691Z] =================================================================================================================== 00:20:04.777 [2024-11-20T08:53:41.691Z] Total : 3276.31 12.80 0.00 0.00 38734.61 9466.31 36117.62 00:20:04.777 { 00:20:04.777 "results": [ 00:20:04.777 { 00:20:04.777 "job": "nvme0n1", 00:20:04.777 "core_mask": "0x2", 00:20:04.777 "workload": "verify", 00:20:04.777 "status": "finished", 00:20:04.777 "verify_range": { 00:20:04.777 "start": 0, 00:20:04.777 "length": 8192 00:20:04.778 }, 00:20:04.778 "queue_depth": 128, 00:20:04.778 "io_size": 4096, 00:20:04.778 "runtime": 1.020355, 00:20:04.778 "iops": 3276.3106957872506, 00:20:04.778 "mibps": 12.798088655418947, 00:20:04.778 "io_failed": 0, 00:20:04.778 "io_timeout": 0, 00:20:04.778 "avg_latency_us": 38734.60695538494, 00:20:04.778 "min_latency_us": 9466.31111111111, 00:20:04.778 "max_latency_us": 36117.61777777778 00:20:04.778 } 00:20:04.778 ], 00:20:04.778 "core_count": 1 00:20:04.778 } 00:20:04.778 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3762301 00:20:04.778 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3762301 ']' 00:20:04.778 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3762301 00:20:04.778 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:04.778 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.778 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3762301 00:20:04.778 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:04.778 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:04.778 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3762301' 00:20:04.778 killing process with pid 3762301 00:20:04.778 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3762301 00:20:04.778 Received shutdown signal, test time was about 1.000000 seconds 00:20:04.778 00:20:04.778 Latency(us) 00:20:04.778 [2024-11-20T08:53:41.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.778 [2024-11-20T08:53:41.692Z] =================================================================================================================== 00:20:04.778 [2024-11-20T08:53:41.692Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.778 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3762301 00:20:05.035 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3761958 00:20:05.035 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3761958 ']' 00:20:05.035 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3761958 00:20:05.035 09:53:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:05.035 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.035 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3761958 00:20:05.035 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:05.035 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:05.035 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3761958' 00:20:05.035 killing process with pid 3761958 00:20:05.035 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3761958 00:20:05.035 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3761958 00:20:05.292 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:05.292 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:05.292 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:05.292 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.292 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3762600 00:20:05.292 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:05.292 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3762600 00:20:05.292 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3762600 ']' 00:20:05.292 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.292 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.292 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.292 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.292 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.292 [2024-11-20 09:53:42.173339] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:20:05.292 [2024-11-20 09:53:42.173439] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.549 [2024-11-20 09:53:42.243987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.549 [2024-11-20 09:53:42.299717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.549 [2024-11-20 09:53:42.299772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:05.549 [2024-11-20 09:53:42.299795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.549 [2024-11-20 09:53:42.299806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.549 [2024-11-20 09:53:42.299815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:05.549 [2024-11-20 09:53:42.300392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.549 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.549 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:05.549 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:05.549 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:05.549 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.549 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.549 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:05.549 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.549 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.549 [2024-11-20 09:53:42.442942] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.807 malloc0 00:20:05.807 [2024-11-20 09:53:42.475665] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:05.807 [2024-11-20 09:53:42.475894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.807 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.807 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3762626 00:20:05.807 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:05.807 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3762626 /var/tmp/bdevperf.sock 00:20:05.807 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3762626 ']' 00:20:05.807 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.807 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.807 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:05.807 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.807 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.807 [2024-11-20 09:53:42.546849] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
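The bdevperf instance starting here (pid 3762626) was launched with -z -r /var/tmp/bdevperf.sock and a -q 128 -o 4k -w verify -t 1 workload; as in the earlier runs, the I/O does not begin at launch. The trace that follows first registers key0 and attaches the TLS-secured controller over that socket, and only then kicks the run off with the perform_tests helper. That final step on its own looks like:

  # Trigger the queued 1-second verify workload once nvme0 has been attached over the RPC socket
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests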
00:20:05.808 [2024-11-20 09:53:42.546908] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3762626 ] 00:20:05.808 [2024-11-20 09:53:42.611257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.808 [2024-11-20 09:53:42.668907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.066 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.066 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:06.066 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pBzmfrvr1q 00:20:06.323 09:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:06.581 [2024-11-20 09:53:43.329321] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.581 nvme0n1 00:20:06.581 09:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:06.838 Running I/O for 1 seconds... 00:20:07.881 3506.00 IOPS, 13.70 MiB/s 00:20:07.881 Latency(us) 00:20:07.881 [2024-11-20T08:53:44.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.881 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:07.881 Verification LBA range: start 0x0 length 0x2000 00:20:07.881 nvme0n1 : 1.02 3550.37 13.87 0.00 0.00 35666.54 10097.40 30098.01 00:20:07.881 [2024-11-20T08:53:44.795Z] =================================================================================================================== 00:20:07.881 [2024-11-20T08:53:44.795Z] Total : 3550.37 13.87 0.00 0.00 35666.54 10097.40 30098.01 00:20:07.881 { 00:20:07.881 "results": [ 00:20:07.881 { 00:20:07.881 "job": "nvme0n1", 00:20:07.881 "core_mask": "0x2", 00:20:07.881 "workload": "verify", 00:20:07.881 "status": "finished", 00:20:07.881 "verify_range": { 00:20:07.881 "start": 0, 00:20:07.881 "length": 8192 00:20:07.881 }, 00:20:07.881 "queue_depth": 128, 00:20:07.881 "io_size": 4096, 00:20:07.881 "runtime": 1.023555, 00:20:07.881 "iops": 3550.3710108396717, 00:20:07.881 "mibps": 13.868636761092468, 00:20:07.881 "io_failed": 0, 00:20:07.881 "io_timeout": 0, 00:20:07.881 "avg_latency_us": 35666.537626938996, 00:20:07.881 "min_latency_us": 10097.39851851852, 00:20:07.881 "max_latency_us": 30098.014814814815 00:20:07.881 } 00:20:07.881 ], 00:20:07.881 "core_count": 1 00:20:07.881 } 00:20:07.881 09:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:07.881 09:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.881 09:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.881 09:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.881 09:53:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:07.881 "subsystems": [ 00:20:07.881 { 00:20:07.881 "subsystem": "keyring", 00:20:07.881 "config": [ 00:20:07.881 { 00:20:07.881 "method": "keyring_file_add_key", 00:20:07.881 "params": { 00:20:07.881 "name": "key0", 00:20:07.881 "path": "/tmp/tmp.pBzmfrvr1q" 00:20:07.881 } 00:20:07.881 } 00:20:07.881 ] 00:20:07.881 }, 00:20:07.881 { 00:20:07.881 "subsystem": "iobuf", 00:20:07.881 "config": [ 00:20:07.881 { 00:20:07.881 "method": "iobuf_set_options", 00:20:07.881 "params": { 00:20:07.881 "small_pool_count": 8192, 00:20:07.881 "large_pool_count": 1024, 00:20:07.881 "small_bufsize": 8192, 00:20:07.881 "large_bufsize": 135168, 00:20:07.881 "enable_numa": false 00:20:07.881 } 00:20:07.881 } 00:20:07.881 ] 00:20:07.881 }, 00:20:07.881 { 00:20:07.881 "subsystem": "sock", 00:20:07.881 "config": [ 00:20:07.881 { 00:20:07.881 "method": "sock_set_default_impl", 00:20:07.881 "params": { 00:20:07.881 "impl_name": "posix" 00:20:07.881 } 00:20:07.881 }, 00:20:07.881 { 00:20:07.881 "method": "sock_impl_set_options", 00:20:07.881 "params": { 00:20:07.881 "impl_name": "ssl", 00:20:07.881 "recv_buf_size": 4096, 00:20:07.881 "send_buf_size": 4096, 00:20:07.881 "enable_recv_pipe": true, 00:20:07.881 "enable_quickack": false, 00:20:07.881 "enable_placement_id": 0, 00:20:07.881 "enable_zerocopy_send_server": true, 00:20:07.881 "enable_zerocopy_send_client": false, 00:20:07.881 "zerocopy_threshold": 0, 00:20:07.881 "tls_version": 0, 00:20:07.881 "enable_ktls": false 00:20:07.881 } 00:20:07.881 }, 00:20:07.881 { 00:20:07.881 "method": "sock_impl_set_options", 00:20:07.881 "params": { 00:20:07.881 "impl_name": "posix", 00:20:07.881 "recv_buf_size": 2097152, 00:20:07.881 "send_buf_size": 2097152, 00:20:07.881 "enable_recv_pipe": true, 00:20:07.881 "enable_quickack": false, 00:20:07.881 "enable_placement_id": 0, 00:20:07.881 "enable_zerocopy_send_server": true, 00:20:07.881 "enable_zerocopy_send_client": false, 00:20:07.881 "zerocopy_threshold": 0, 00:20:07.881 "tls_version": 0, 00:20:07.881 "enable_ktls": false 00:20:07.881 } 00:20:07.881 } 00:20:07.881 ] 00:20:07.881 }, 00:20:07.881 { 00:20:07.881 "subsystem": "vmd", 00:20:07.881 "config": [] 00:20:07.881 }, 00:20:07.881 { 00:20:07.881 "subsystem": "accel", 00:20:07.881 "config": [ 00:20:07.881 { 00:20:07.881 "method": "accel_set_options", 00:20:07.881 "params": { 00:20:07.881 "small_cache_size": 128, 00:20:07.881 "large_cache_size": 16, 00:20:07.881 "task_count": 2048, 00:20:07.881 "sequence_count": 2048, 00:20:07.881 "buf_count": 2048 00:20:07.881 } 00:20:07.881 } 00:20:07.881 ] 00:20:07.881 }, 00:20:07.881 { 00:20:07.881 "subsystem": "bdev", 00:20:07.881 "config": [ 00:20:07.881 { 00:20:07.881 "method": "bdev_set_options", 00:20:07.881 "params": { 00:20:07.881 "bdev_io_pool_size": 65535, 00:20:07.881 "bdev_io_cache_size": 256, 00:20:07.881 "bdev_auto_examine": true, 00:20:07.881 "iobuf_small_cache_size": 128, 00:20:07.881 "iobuf_large_cache_size": 16 00:20:07.881 } 00:20:07.881 }, 00:20:07.881 { 00:20:07.881 "method": "bdev_raid_set_options", 00:20:07.881 "params": { 00:20:07.881 "process_window_size_kb": 1024, 00:20:07.881 "process_max_bandwidth_mb_sec": 0 00:20:07.881 } 00:20:07.881 }, 00:20:07.881 { 00:20:07.881 "method": "bdev_iscsi_set_options", 00:20:07.881 "params": { 00:20:07.881 "timeout_sec": 30 00:20:07.881 } 00:20:07.881 }, 00:20:07.881 { 00:20:07.881 "method": "bdev_nvme_set_options", 00:20:07.881 "params": { 00:20:07.881 "action_on_timeout": "none", 00:20:07.881 
"timeout_us": 0, 00:20:07.881 "timeout_admin_us": 0, 00:20:07.881 "keep_alive_timeout_ms": 10000, 00:20:07.881 "arbitration_burst": 0, 00:20:07.881 "low_priority_weight": 0, 00:20:07.881 "medium_priority_weight": 0, 00:20:07.881 "high_priority_weight": 0, 00:20:07.881 "nvme_adminq_poll_period_us": 10000, 00:20:07.881 "nvme_ioq_poll_period_us": 0, 00:20:07.881 "io_queue_requests": 0, 00:20:07.881 "delay_cmd_submit": true, 00:20:07.881 "transport_retry_count": 4, 00:20:07.881 "bdev_retry_count": 3, 00:20:07.881 "transport_ack_timeout": 0, 00:20:07.881 "ctrlr_loss_timeout_sec": 0, 00:20:07.881 "reconnect_delay_sec": 0, 00:20:07.881 "fast_io_fail_timeout_sec": 0, 00:20:07.881 "disable_auto_failback": false, 00:20:07.881 "generate_uuids": false, 00:20:07.881 "transport_tos": 0, 00:20:07.881 "nvme_error_stat": false, 00:20:07.881 "rdma_srq_size": 0, 00:20:07.881 "io_path_stat": false, 00:20:07.881 "allow_accel_sequence": false, 00:20:07.881 "rdma_max_cq_size": 0, 00:20:07.881 "rdma_cm_event_timeout_ms": 0, 00:20:07.881 "dhchap_digests": [ 00:20:07.881 "sha256", 00:20:07.881 "sha384", 00:20:07.881 "sha512" 00:20:07.881 ], 00:20:07.881 "dhchap_dhgroups": [ 00:20:07.881 "null", 00:20:07.881 "ffdhe2048", 00:20:07.881 "ffdhe3072", 00:20:07.881 "ffdhe4096", 00:20:07.881 "ffdhe6144", 00:20:07.881 "ffdhe8192" 00:20:07.881 ] 00:20:07.881 } 00:20:07.881 }, 00:20:07.881 { 00:20:07.881 "method": "bdev_nvme_set_hotplug", 00:20:07.881 "params": { 00:20:07.881 "period_us": 100000, 00:20:07.881 "enable": false 00:20:07.881 } 00:20:07.881 }, 00:20:07.881 { 00:20:07.881 "method": "bdev_malloc_create", 00:20:07.881 "params": { 00:20:07.881 "name": "malloc0", 00:20:07.881 "num_blocks": 8192, 00:20:07.881 "block_size": 4096, 00:20:07.881 "physical_block_size": 4096, 00:20:07.881 "uuid": "f4f9a5ff-27fb-41b5-bca8-55d9dde3f85c", 00:20:07.881 "optimal_io_boundary": 0, 00:20:07.881 "md_size": 0, 00:20:07.881 "dif_type": 0, 00:20:07.881 "dif_is_head_of_md": false, 00:20:07.881 "dif_pi_format": 0 00:20:07.881 } 00:20:07.881 }, 00:20:07.881 { 00:20:07.881 "method": "bdev_wait_for_examine" 00:20:07.881 } 00:20:07.881 ] 00:20:07.881 }, 00:20:07.881 { 00:20:07.881 "subsystem": "nbd", 00:20:07.881 "config": [] 00:20:07.881 }, 00:20:07.881 { 00:20:07.881 "subsystem": "scheduler", 00:20:07.881 "config": [ 00:20:07.881 { 00:20:07.881 "method": "framework_set_scheduler", 00:20:07.881 "params": { 00:20:07.881 "name": "static" 00:20:07.881 } 00:20:07.881 } 00:20:07.881 ] 00:20:07.881 }, 00:20:07.881 { 00:20:07.881 "subsystem": "nvmf", 00:20:07.881 "config": [ 00:20:07.881 { 00:20:07.881 "method": "nvmf_set_config", 00:20:07.881 "params": { 00:20:07.882 "discovery_filter": "match_any", 00:20:07.882 "admin_cmd_passthru": { 00:20:07.882 "identify_ctrlr": false 00:20:07.882 }, 00:20:07.882 "dhchap_digests": [ 00:20:07.882 "sha256", 00:20:07.882 "sha384", 00:20:07.882 "sha512" 00:20:07.882 ], 00:20:07.882 "dhchap_dhgroups": [ 00:20:07.882 "null", 00:20:07.882 "ffdhe2048", 00:20:07.882 "ffdhe3072", 00:20:07.882 "ffdhe4096", 00:20:07.882 "ffdhe6144", 00:20:07.882 "ffdhe8192" 00:20:07.882 ] 00:20:07.882 } 00:20:07.882 }, 00:20:07.882 { 00:20:07.882 "method": "nvmf_set_max_subsystems", 00:20:07.882 "params": { 00:20:07.882 "max_subsystems": 1024 00:20:07.882 } 00:20:07.882 }, 00:20:07.882 { 00:20:07.882 "method": "nvmf_set_crdt", 00:20:07.882 "params": { 00:20:07.882 "crdt1": 0, 00:20:07.882 "crdt2": 0, 00:20:07.882 "crdt3": 0 00:20:07.882 } 00:20:07.882 }, 00:20:07.882 { 00:20:07.882 "method": "nvmf_create_transport", 00:20:07.882 "params": 
{ 00:20:07.882 "trtype": "TCP", 00:20:07.882 "max_queue_depth": 128, 00:20:07.882 "max_io_qpairs_per_ctrlr": 127, 00:20:07.882 "in_capsule_data_size": 4096, 00:20:07.882 "max_io_size": 131072, 00:20:07.882 "io_unit_size": 131072, 00:20:07.882 "max_aq_depth": 128, 00:20:07.882 "num_shared_buffers": 511, 00:20:07.882 "buf_cache_size": 4294967295, 00:20:07.882 "dif_insert_or_strip": false, 00:20:07.882 "zcopy": false, 00:20:07.882 "c2h_success": false, 00:20:07.882 "sock_priority": 0, 00:20:07.882 "abort_timeout_sec": 1, 00:20:07.882 "ack_timeout": 0, 00:20:07.882 "data_wr_pool_size": 0 00:20:07.882 } 00:20:07.882 }, 00:20:07.882 { 00:20:07.882 "method": "nvmf_create_subsystem", 00:20:07.882 "params": { 00:20:07.882 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.882 "allow_any_host": false, 00:20:07.882 "serial_number": "00000000000000000000", 00:20:07.882 "model_number": "SPDK bdev Controller", 00:20:07.882 "max_namespaces": 32, 00:20:07.882 "min_cntlid": 1, 00:20:07.882 "max_cntlid": 65519, 00:20:07.882 "ana_reporting": false 00:20:07.882 } 00:20:07.882 }, 00:20:07.882 { 00:20:07.882 "method": "nvmf_subsystem_add_host", 00:20:07.882 "params": { 00:20:07.882 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.882 "host": "nqn.2016-06.io.spdk:host1", 00:20:07.882 "psk": "key0" 00:20:07.882 } 00:20:07.882 }, 00:20:07.882 { 00:20:07.882 "method": "nvmf_subsystem_add_ns", 00:20:07.882 "params": { 00:20:07.882 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.882 "namespace": { 00:20:07.882 "nsid": 1, 00:20:07.882 "bdev_name": "malloc0", 00:20:07.882 "nguid": "F4F9A5FF27FB41B5BCA855D9DDE3F85C", 00:20:07.882 "uuid": "f4f9a5ff-27fb-41b5-bca8-55d9dde3f85c", 00:20:07.882 "no_auto_visible": false 00:20:07.882 } 00:20:07.882 } 00:20:07.882 }, 00:20:07.882 { 00:20:07.882 "method": "nvmf_subsystem_add_listener", 00:20:07.882 "params": { 00:20:07.882 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.882 "listen_address": { 00:20:07.882 "trtype": "TCP", 00:20:07.882 "adrfam": "IPv4", 00:20:07.882 "traddr": "10.0.0.2", 00:20:07.882 "trsvcid": "4420" 00:20:07.882 }, 00:20:07.882 "secure_channel": false, 00:20:07.882 "sock_impl": "ssl" 00:20:07.882 } 00:20:07.882 } 00:20:07.882 ] 00:20:07.882 } 00:20:07.882 ] 00:20:07.882 }' 00:20:07.882 09:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:08.140 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:08.140 "subsystems": [ 00:20:08.140 { 00:20:08.140 "subsystem": "keyring", 00:20:08.140 "config": [ 00:20:08.140 { 00:20:08.140 "method": "keyring_file_add_key", 00:20:08.140 "params": { 00:20:08.140 "name": "key0", 00:20:08.140 "path": "/tmp/tmp.pBzmfrvr1q" 00:20:08.140 } 00:20:08.140 } 00:20:08.140 ] 00:20:08.140 }, 00:20:08.140 { 00:20:08.140 "subsystem": "iobuf", 00:20:08.140 "config": [ 00:20:08.140 { 00:20:08.140 "method": "iobuf_set_options", 00:20:08.140 "params": { 00:20:08.140 "small_pool_count": 8192, 00:20:08.140 "large_pool_count": 1024, 00:20:08.140 "small_bufsize": 8192, 00:20:08.140 "large_bufsize": 135168, 00:20:08.140 "enable_numa": false 00:20:08.140 } 00:20:08.140 } 00:20:08.140 ] 00:20:08.140 }, 00:20:08.140 { 00:20:08.140 "subsystem": "sock", 00:20:08.140 "config": [ 00:20:08.140 { 00:20:08.140 "method": "sock_set_default_impl", 00:20:08.140 "params": { 00:20:08.140 "impl_name": "posix" 00:20:08.140 } 00:20:08.140 }, 00:20:08.140 { 00:20:08.140 "method": "sock_impl_set_options", 00:20:08.140 
"params": { 00:20:08.140 "impl_name": "ssl", 00:20:08.140 "recv_buf_size": 4096, 00:20:08.140 "send_buf_size": 4096, 00:20:08.140 "enable_recv_pipe": true, 00:20:08.140 "enable_quickack": false, 00:20:08.140 "enable_placement_id": 0, 00:20:08.140 "enable_zerocopy_send_server": true, 00:20:08.140 "enable_zerocopy_send_client": false, 00:20:08.140 "zerocopy_threshold": 0, 00:20:08.140 "tls_version": 0, 00:20:08.140 "enable_ktls": false 00:20:08.140 } 00:20:08.140 }, 00:20:08.140 { 00:20:08.140 "method": "sock_impl_set_options", 00:20:08.140 "params": { 00:20:08.140 "impl_name": "posix", 00:20:08.140 "recv_buf_size": 2097152, 00:20:08.140 "send_buf_size": 2097152, 00:20:08.140 "enable_recv_pipe": true, 00:20:08.140 "enable_quickack": false, 00:20:08.140 "enable_placement_id": 0, 00:20:08.140 "enable_zerocopy_send_server": true, 00:20:08.140 "enable_zerocopy_send_client": false, 00:20:08.140 "zerocopy_threshold": 0, 00:20:08.140 "tls_version": 0, 00:20:08.140 "enable_ktls": false 00:20:08.140 } 00:20:08.140 } 00:20:08.140 ] 00:20:08.140 }, 00:20:08.140 { 00:20:08.140 "subsystem": "vmd", 00:20:08.140 "config": [] 00:20:08.140 }, 00:20:08.140 { 00:20:08.140 "subsystem": "accel", 00:20:08.140 "config": [ 00:20:08.140 { 00:20:08.140 "method": "accel_set_options", 00:20:08.140 "params": { 00:20:08.140 "small_cache_size": 128, 00:20:08.140 "large_cache_size": 16, 00:20:08.140 "task_count": 2048, 00:20:08.140 "sequence_count": 2048, 00:20:08.140 "buf_count": 2048 00:20:08.140 } 00:20:08.140 } 00:20:08.140 ] 00:20:08.140 }, 00:20:08.140 { 00:20:08.140 "subsystem": "bdev", 00:20:08.140 "config": [ 00:20:08.140 { 00:20:08.140 "method": "bdev_set_options", 00:20:08.140 "params": { 00:20:08.140 "bdev_io_pool_size": 65535, 00:20:08.140 "bdev_io_cache_size": 256, 00:20:08.140 "bdev_auto_examine": true, 00:20:08.140 "iobuf_small_cache_size": 128, 00:20:08.140 "iobuf_large_cache_size": 16 00:20:08.140 } 00:20:08.140 }, 00:20:08.140 { 00:20:08.140 "method": "bdev_raid_set_options", 00:20:08.140 "params": { 00:20:08.140 "process_window_size_kb": 1024, 00:20:08.140 "process_max_bandwidth_mb_sec": 0 00:20:08.140 } 00:20:08.140 }, 00:20:08.140 { 00:20:08.140 "method": "bdev_iscsi_set_options", 00:20:08.140 "params": { 00:20:08.140 "timeout_sec": 30 00:20:08.140 } 00:20:08.140 }, 00:20:08.140 { 00:20:08.140 "method": "bdev_nvme_set_options", 00:20:08.140 "params": { 00:20:08.140 "action_on_timeout": "none", 00:20:08.140 "timeout_us": 0, 00:20:08.140 "timeout_admin_us": 0, 00:20:08.140 "keep_alive_timeout_ms": 10000, 00:20:08.140 "arbitration_burst": 0, 00:20:08.140 "low_priority_weight": 0, 00:20:08.140 "medium_priority_weight": 0, 00:20:08.140 "high_priority_weight": 0, 00:20:08.140 "nvme_adminq_poll_period_us": 10000, 00:20:08.140 "nvme_ioq_poll_period_us": 0, 00:20:08.140 "io_queue_requests": 512, 00:20:08.140 "delay_cmd_submit": true, 00:20:08.140 "transport_retry_count": 4, 00:20:08.140 "bdev_retry_count": 3, 00:20:08.140 "transport_ack_timeout": 0, 00:20:08.140 "ctrlr_loss_timeout_sec": 0, 00:20:08.140 "reconnect_delay_sec": 0, 00:20:08.140 "fast_io_fail_timeout_sec": 0, 00:20:08.140 "disable_auto_failback": false, 00:20:08.140 "generate_uuids": false, 00:20:08.140 "transport_tos": 0, 00:20:08.140 "nvme_error_stat": false, 00:20:08.140 "rdma_srq_size": 0, 00:20:08.140 "io_path_stat": false, 00:20:08.140 "allow_accel_sequence": false, 00:20:08.140 "rdma_max_cq_size": 0, 00:20:08.140 "rdma_cm_event_timeout_ms": 0, 00:20:08.140 "dhchap_digests": [ 00:20:08.140 "sha256", 00:20:08.140 "sha384", 00:20:08.140 
"sha512" 00:20:08.140 ], 00:20:08.140 "dhchap_dhgroups": [ 00:20:08.140 "null", 00:20:08.140 "ffdhe2048", 00:20:08.140 "ffdhe3072", 00:20:08.140 "ffdhe4096", 00:20:08.140 "ffdhe6144", 00:20:08.140 "ffdhe8192" 00:20:08.140 ] 00:20:08.140 } 00:20:08.140 }, 00:20:08.140 { 00:20:08.140 "method": "bdev_nvme_attach_controller", 00:20:08.140 "params": { 00:20:08.140 "name": "nvme0", 00:20:08.140 "trtype": "TCP", 00:20:08.140 "adrfam": "IPv4", 00:20:08.140 "traddr": "10.0.0.2", 00:20:08.140 "trsvcid": "4420", 00:20:08.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.140 "prchk_reftag": false, 00:20:08.140 "prchk_guard": false, 00:20:08.140 "ctrlr_loss_timeout_sec": 0, 00:20:08.140 "reconnect_delay_sec": 0, 00:20:08.140 "fast_io_fail_timeout_sec": 0, 00:20:08.140 "psk": "key0", 00:20:08.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:08.140 "hdgst": false, 00:20:08.140 "ddgst": false, 00:20:08.140 "multipath": "multipath" 00:20:08.140 } 00:20:08.140 }, 00:20:08.140 { 00:20:08.140 "method": "bdev_nvme_set_hotplug", 00:20:08.140 "params": { 00:20:08.140 "period_us": 100000, 00:20:08.140 "enable": false 00:20:08.140 } 00:20:08.140 }, 00:20:08.140 { 00:20:08.140 "method": "bdev_enable_histogram", 00:20:08.140 "params": { 00:20:08.140 "name": "nvme0n1", 00:20:08.140 "enable": true 00:20:08.140 } 00:20:08.140 }, 00:20:08.140 { 00:20:08.140 "method": "bdev_wait_for_examine" 00:20:08.140 } 00:20:08.140 ] 00:20:08.140 }, 00:20:08.140 { 00:20:08.140 "subsystem": "nbd", 00:20:08.140 "config": [] 00:20:08.140 } 00:20:08.140 ] 00:20:08.140 }' 00:20:08.140 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3762626 00:20:08.140 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3762626 ']' 00:20:08.141 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3762626 00:20:08.141 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:08.141 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.399 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3762626 00:20:08.399 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:08.399 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:08.399 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3762626' 00:20:08.399 killing process with pid 3762626 00:20:08.399 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3762626 00:20:08.399 Received shutdown signal, test time was about 1.000000 seconds 00:20:08.399 00:20:08.399 Latency(us) 00:20:08.399 [2024-11-20T08:53:45.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.399 [2024-11-20T08:53:45.313Z] =================================================================================================================== 00:20:08.399 [2024-11-20T08:53:45.313Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:08.399 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3762626 00:20:08.399 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3762600 00:20:08.399 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3762600 
']' 00:20:08.399 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3762600 00:20:08.399 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:08.399 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.399 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3762600 00:20:08.655 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:08.655 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:08.655 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3762600' 00:20:08.655 killing process with pid 3762600 00:20:08.656 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3762600 00:20:08.656 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3762600 00:20:08.938 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:08.938 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:08.938 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:08.938 "subsystems": [ 00:20:08.938 { 00:20:08.938 "subsystem": "keyring", 00:20:08.938 "config": [ 00:20:08.938 { 00:20:08.938 "method": "keyring_file_add_key", 00:20:08.938 "params": { 00:20:08.938 "name": "key0", 00:20:08.938 "path": "/tmp/tmp.pBzmfrvr1q" 00:20:08.938 } 00:20:08.938 } 00:20:08.938 ] 00:20:08.938 }, 00:20:08.938 { 00:20:08.938 "subsystem": "iobuf", 00:20:08.938 "config": [ 00:20:08.938 { 00:20:08.938 "method": "iobuf_set_options", 00:20:08.938 "params": { 00:20:08.938 "small_pool_count": 8192, 00:20:08.938 "large_pool_count": 1024, 00:20:08.938 "small_bufsize": 8192, 00:20:08.938 "large_bufsize": 135168, 00:20:08.938 "enable_numa": false 00:20:08.938 } 00:20:08.938 } 00:20:08.938 ] 00:20:08.938 }, 00:20:08.938 { 00:20:08.938 "subsystem": "sock", 00:20:08.938 "config": [ 00:20:08.938 { 00:20:08.939 "method": "sock_set_default_impl", 00:20:08.939 "params": { 00:20:08.939 "impl_name": "posix" 00:20:08.939 } 00:20:08.939 }, 00:20:08.939 { 00:20:08.939 "method": "sock_impl_set_options", 00:20:08.939 "params": { 00:20:08.939 "impl_name": "ssl", 00:20:08.939 "recv_buf_size": 4096, 00:20:08.939 "send_buf_size": 4096, 00:20:08.939 "enable_recv_pipe": true, 00:20:08.939 "enable_quickack": false, 00:20:08.939 "enable_placement_id": 0, 00:20:08.939 "enable_zerocopy_send_server": true, 00:20:08.939 "enable_zerocopy_send_client": false, 00:20:08.939 "zerocopy_threshold": 0, 00:20:08.939 "tls_version": 0, 00:20:08.939 "enable_ktls": false 00:20:08.939 } 00:20:08.939 }, 00:20:08.939 { 00:20:08.939 "method": "sock_impl_set_options", 00:20:08.939 "params": { 00:20:08.939 "impl_name": "posix", 00:20:08.939 "recv_buf_size": 2097152, 00:20:08.939 "send_buf_size": 2097152, 00:20:08.939 "enable_recv_pipe": true, 00:20:08.939 "enable_quickack": false, 00:20:08.939 "enable_placement_id": 0, 00:20:08.939 "enable_zerocopy_send_server": true, 00:20:08.939 "enable_zerocopy_send_client": false, 00:20:08.939 "zerocopy_threshold": 0, 00:20:08.939 "tls_version": 0, 00:20:08.939 "enable_ktls": false 00:20:08.939 } 00:20:08.939 } 00:20:08.939 ] 00:20:08.939 }, 00:20:08.939 { 00:20:08.939 "subsystem": 
"vmd", 00:20:08.939 "config": [] 00:20:08.939 }, 00:20:08.939 { 00:20:08.939 "subsystem": "accel", 00:20:08.939 "config": [ 00:20:08.939 { 00:20:08.939 "method": "accel_set_options", 00:20:08.939 "params": { 00:20:08.939 "small_cache_size": 128, 00:20:08.939 "large_cache_size": 16, 00:20:08.939 "task_count": 2048, 00:20:08.939 "sequence_count": 2048, 00:20:08.939 "buf_count": 2048 00:20:08.939 } 00:20:08.939 } 00:20:08.939 ] 00:20:08.939 }, 00:20:08.939 { 00:20:08.939 "subsystem": "bdev", 00:20:08.939 "config": [ 00:20:08.939 { 00:20:08.939 "method": "bdev_set_options", 00:20:08.939 "params": { 00:20:08.939 "bdev_io_pool_size": 65535, 00:20:08.939 "bdev_io_cache_size": 256, 00:20:08.939 "bdev_auto_examine": true, 00:20:08.939 "iobuf_small_cache_size": 128, 00:20:08.939 "iobuf_large_cache_size": 16 00:20:08.939 } 00:20:08.939 }, 00:20:08.939 { 00:20:08.939 "method": "bdev_raid_set_options", 00:20:08.939 "params": { 00:20:08.939 "process_window_size_kb": 1024, 00:20:08.939 "process_max_bandwidth_mb_sec": 0 00:20:08.939 } 00:20:08.939 }, 00:20:08.939 { 00:20:08.939 "method": "bdev_iscsi_set_options", 00:20:08.939 "params": { 00:20:08.939 "timeout_sec": 30 00:20:08.939 } 00:20:08.939 }, 00:20:08.939 { 00:20:08.939 "method": "bdev_nvme_set_options", 00:20:08.939 "params": { 00:20:08.939 "action_on_timeout": "none", 00:20:08.939 "timeout_us": 0, 00:20:08.939 "timeout_admin_us": 0, 00:20:08.939 "keep_alive_timeout_ms": 10000, 00:20:08.939 "arbitration_burst": 0, 00:20:08.939 "low_priority_weight": 0, 00:20:08.939 "medium_priority_weight": 0, 00:20:08.939 "high_priority_weight": 0, 00:20:08.939 "nvme_adminq_poll_period_us": 10000, 00:20:08.939 "nvme_ioq_poll_period_us": 0, 00:20:08.939 "io_queue_requests": 0, 00:20:08.939 "delay_cmd_submit": true, 00:20:08.939 "transport_retry_count": 4, 00:20:08.939 "bdev_retry_count": 3, 00:20:08.939 "transport_ack_timeout": 0, 00:20:08.939 "ctrlr_loss_timeout_sec": 0, 00:20:08.939 "reconnect_delay_sec": 0, 00:20:08.939 "fast_io_fail_timeout_sec": 0, 00:20:08.939 "disable_auto_failback": false, 00:20:08.939 "generate_uuids": false, 00:20:08.939 "transport_tos": 0, 00:20:08.939 "nvme_error_stat": false, 00:20:08.939 "rdma_srq_size": 0, 00:20:08.939 "io_path_stat": false, 00:20:08.939 "allow_accel_sequence": false, 00:20:08.939 "rdma_max_cq_size": 0, 00:20:08.939 "rdma_cm_event_timeout_ms": 0, 00:20:08.939 "dhchap_digests": [ 00:20:08.939 "sha256", 00:20:08.939 "sha384", 00:20:08.939 "sha512" 00:20:08.939 ], 00:20:08.939 "dhchap_dhgroups": [ 00:20:08.939 "null", 00:20:08.939 "ffdhe2048", 00:20:08.939 "ffdhe3072", 00:20:08.939 "ffdhe4096", 00:20:08.939 "ffdhe6144", 00:20:08.939 "ffdhe8192" 00:20:08.939 ] 00:20:08.939 } 00:20:08.939 }, 00:20:08.939 { 00:20:08.939 "method": "bdev_nvme_set_hotplug", 00:20:08.939 "params": { 00:20:08.939 "period_us": 100000, 00:20:08.939 "enable": false 00:20:08.939 } 00:20:08.939 }, 00:20:08.939 { 00:20:08.939 "method": "bdev_malloc_create", 00:20:08.939 "params": { 00:20:08.939 "name": "malloc0", 00:20:08.939 "num_blocks": 8192, 00:20:08.939 "block_size": 4096, 00:20:08.939 "physical_block_size": 4096, 00:20:08.939 "uuid": "f4f9a5ff-27fb-41b5-bca8-55d9dde3f85c", 00:20:08.939 "optimal_io_boundary": 0, 00:20:08.939 "md_size": 0, 00:20:08.939 "dif_type": 0, 00:20:08.939 "dif_is_head_of_md": false, 00:20:08.939 "dif_pi_format": 0 00:20:08.939 } 00:20:08.939 }, 00:20:08.939 { 00:20:08.939 "method": "bdev_wait_for_examine" 00:20:08.939 } 00:20:08.939 ] 00:20:08.939 }, 00:20:08.939 { 00:20:08.939 "subsystem": "nbd", 00:20:08.939 "config": 
[] 00:20:08.939 }, 00:20:08.939 { 00:20:08.939 "subsystem": "scheduler", 00:20:08.939 "config": [ 00:20:08.939 { 00:20:08.939 "method": "framework_set_scheduler", 00:20:08.939 "params": { 00:20:08.939 "name": "static" 00:20:08.939 } 00:20:08.939 } 00:20:08.939 ] 00:20:08.939 }, 00:20:08.939 { 00:20:08.939 "subsystem": "nvmf", 00:20:08.939 "config": [ 00:20:08.939 { 00:20:08.939 "method": "nvmf_set_config", 00:20:08.939 "params": { 00:20:08.939 "discovery_filter": "match_any", 00:20:08.939 "admin_cmd_passthru": { 00:20:08.939 "identify_ctrlr": false 00:20:08.939 }, 00:20:08.939 "dhchap_digests": [ 00:20:08.939 "sha256", 00:20:08.939 "sha384", 00:20:08.939 "sha512" 00:20:08.939 ], 00:20:08.939 "dhchap_dhgroups": [ 00:20:08.939 "null", 00:20:08.939 "ffdhe2048", 00:20:08.939 "ffdhe3072", 00:20:08.939 "ffdhe4096", 00:20:08.939 "ffdhe6144", 00:20:08.939 "ffdhe8192" 00:20:08.939 ] 00:20:08.939 } 00:20:08.939 }, 00:20:08.939 { 00:20:08.939 "method": "nvmf_set_max_subsystems", 00:20:08.939 "params": { 00:20:08.939 "max_subsystems": 1024 00:20:08.939 } 00:20:08.939 }, 00:20:08.939 { 00:20:08.939 "method": "nvmf_set_crdt", 00:20:08.939 "params": { 00:20:08.939 "crdt1": 0, 00:20:08.939 "crdt2": 0, 00:20:08.939 "crdt3": 0 00:20:08.939 } 00:20:08.939 }, 00:20:08.939 { 00:20:08.939 "method": "nvmf_create_transport", 00:20:08.939 "params": { 00:20:08.939 "trtype": "TCP", 00:20:08.939 "max_queue_depth": 128, 00:20:08.939 "max_io_qpairs_per_ctrlr": 127, 00:20:08.939 "in_capsule_data_size": 4096, 00:20:08.939 "max_io_size": 131072, 00:20:08.939 "io_unit_size": 131072, 00:20:08.939 "max_aq_depth": 128, 00:20:08.939 "num_shared_buffers": 511, 00:20:08.939 "buf_cache_size": 4294967295, 00:20:08.939 "dif_insert_or_strip": false, 00:20:08.939 "zcopy": false, 00:20:08.939 "c2h_success": false, 00:20:08.939 "sock_priority": 0, 00:20:08.940 "abort_timeout_sec": 1, 00:20:08.940 "ack_timeout": 0, 00:20:08.940 "data_wr_pool_size": 0 00:20:08.940 } 00:20:08.940 }, 00:20:08.940 { 00:20:08.940 "method": "nvmf_create_subsystem", 00:20:08.940 "params": { 00:20:08.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.940 "allow_any_host": false, 00:20:08.940 "serial_number": "00000000000000000000", 00:20:08.940 "model_number": "SPDK bdev Controller", 00:20:08.940 "max_namespaces": 32, 00:20:08.940 "min_cntlid": 1, 00:20:08.940 "max_cntlid": 65519, 00:20:08.940 "ana_reporting": false 00:20:08.940 } 00:20:08.940 }, 00:20:08.940 { 00:20:08.940 "method": "nvmf_subsystem_add_host", 00:20:08.940 "params": { 00:20:08.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.940 "host": "nqn.2016-06.io.spdk:host1", 00:20:08.940 "psk": "key0" 00:20:08.940 } 00:20:08.940 }, 00:20:08.940 { 00:20:08.940 "method": "nvmf_subsystem_add_ns", 00:20:08.940 "params": { 00:20:08.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.940 "namespace": { 00:20:08.940 "nsid": 1, 00:20:08.940 "bdev_name": "malloc0", 00:20:08.940 "nguid": "F4F9A5FF27FB41B5BCA855D9DDE3F85C", 00:20:08.940 "uuid": "f4f9a5ff-27fb-41b5-bca8-55d9dde3f85c", 00:20:08.940 "no_auto_visible": false 00:20:08.940 } 00:20:08.940 } 00:20:08.940 }, 00:20:08.940 { 00:20:08.940 "method": "nvmf_subsystem_add_listener", 00:20:08.940 "params": { 00:20:08.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.940 "listen_address": { 00:20:08.940 "trtype": "TCP", 00:20:08.940 "adrfam": "IPv4", 00:20:08.940 "traddr": "10.0.0.2", 00:20:08.940 "trsvcid": "4420" 00:20:08.940 }, 00:20:08.940 "secure_channel": false, 00:20:08.940 "sock_impl": "ssl" 00:20:08.940 } 00:20:08.940 } 00:20:08.940 ] 00:20:08.940 } 
00:20:08.940 ] 00:20:08.940 }' 00:20:08.940 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:08.940 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.940 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3763039 00:20:08.940 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:08.940 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3763039 00:20:08.940 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3763039 ']' 00:20:08.940 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.940 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.940 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.940 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.940 09:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.940 [2024-11-20 09:53:45.652419] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:20:08.940 [2024-11-20 09:53:45.652509] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.940 [2024-11-20 09:53:45.734015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.940 [2024-11-20 09:53:45.793351] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.940 [2024-11-20 09:53:45.793403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.940 [2024-11-20 09:53:45.793423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.940 [2024-11-20 09:53:45.793435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.940 [2024-11-20 09:53:45.793444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
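The JSON echoed above and handed to nvmf_tgt through /dev/fd/62 is the entire target-side configuration for this TLS case. The sketch below is a hand-condensed restatement of just the TLS-relevant methods from that dump (the full dump also carries the malloc0 namespace plus the bdev, sock, accel and scheduler defaults): the PSK file is registered in the keyring as key0, referenced again by nvmf_subsystem_add_host, and the listener is created with secure_channel false on the ssl sock_impl. Every name, path and NQN is copied from the log; the launch line reuses the nvmf_tgt flags shown above, with process substitution standing in for the test's /dev/fd/62 plumbing (an assumption about the wrapper script, not something visible here).

    #!/usr/bin/env bash
    # Condensed restatement of the target config dumped above: a file-based PSK is
    # registered in the keyring and then referenced by nvmf_subsystem_add_host,
    # while the listener uses the ssl sock implementation. Values come from the log.
    tls_target_config='{
      "subsystems": [
        { "subsystem": "keyring", "config": [
            { "method": "keyring_file_add_key",
              "params": { "name": "key0", "path": "/tmp/tmp.pBzmfrvr1q" } } ] },
        { "subsystem": "nvmf", "config": [
            { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } },
            { "method": "nvmf_create_subsystem",
              "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
            { "method": "nvmf_subsystem_add_host",
              "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                          "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
            { "method": "nvmf_subsystem_add_listener",
              "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                          "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                              "traddr": "10.0.0.2", "trsvcid": "4420" },
                          "secure_channel": false, "sock_impl": "ssl" } } ] }
      ]
    }'
    # Same flags as the nvmf_tgt line in the log, run from the SPDK repo root;
    # <(...) plays the role of the /dev/fd/62 file descriptor used by the test.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tls_target_config")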
00:20:08.940 [2024-11-20 09:53:45.794068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.199 [2024-11-20 09:53:46.037619] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.199 [2024-11-20 09:53:46.069679] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.199 [2024-11-20 09:53:46.069887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.833 09:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.833 09:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:09.833 09:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:09.833 09:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:09.833 09:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.833 09:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.833 09:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3763189 00:20:09.833 09:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3763189 /var/tmp/bdevperf.sock 00:20:09.833 09:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3763189 ']' 00:20:09.833 09:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.833 09:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:09.833 09:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.833 09:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:09.833 "subsystems": [ 00:20:09.833 { 00:20:09.833 "subsystem": "keyring", 00:20:09.833 "config": [ 00:20:09.833 { 00:20:09.833 "method": "keyring_file_add_key", 00:20:09.833 "params": { 00:20:09.833 "name": "key0", 00:20:09.833 "path": "/tmp/tmp.pBzmfrvr1q" 00:20:09.833 } 00:20:09.833 } 00:20:09.833 ] 00:20:09.833 }, 00:20:09.833 { 00:20:09.833 "subsystem": "iobuf", 00:20:09.833 "config": [ 00:20:09.833 { 00:20:09.833 "method": "iobuf_set_options", 00:20:09.833 "params": { 00:20:09.833 "small_pool_count": 8192, 00:20:09.833 "large_pool_count": 1024, 00:20:09.833 "small_bufsize": 8192, 00:20:09.833 "large_bufsize": 135168, 00:20:09.833 "enable_numa": false 00:20:09.833 } 00:20:09.833 } 00:20:09.833 ] 00:20:09.833 }, 00:20:09.833 { 00:20:09.833 "subsystem": "sock", 00:20:09.833 "config": [ 00:20:09.833 { 00:20:09.833 "method": "sock_set_default_impl", 00:20:09.833 "params": { 00:20:09.833 "impl_name": "posix" 00:20:09.833 } 00:20:09.833 }, 00:20:09.833 { 00:20:09.833 "method": "sock_impl_set_options", 00:20:09.833 "params": { 00:20:09.833 "impl_name": "ssl", 00:20:09.833 "recv_buf_size": 4096, 00:20:09.833 "send_buf_size": 4096, 00:20:09.833 "enable_recv_pipe": true, 00:20:09.833 "enable_quickack": false, 00:20:09.833 "enable_placement_id": 0, 00:20:09.833 "enable_zerocopy_send_server": true, 00:20:09.833 "enable_zerocopy_send_client": false, 00:20:09.833 "zerocopy_threshold": 0, 00:20:09.833 "tls_version": 0, 00:20:09.833 
"enable_ktls": false 00:20:09.833 } 00:20:09.833 }, 00:20:09.833 { 00:20:09.833 "method": "sock_impl_set_options", 00:20:09.833 "params": { 00:20:09.833 "impl_name": "posix", 00:20:09.833 "recv_buf_size": 2097152, 00:20:09.833 "send_buf_size": 2097152, 00:20:09.833 "enable_recv_pipe": true, 00:20:09.833 "enable_quickack": false, 00:20:09.833 "enable_placement_id": 0, 00:20:09.833 "enable_zerocopy_send_server": true, 00:20:09.833 "enable_zerocopy_send_client": false, 00:20:09.833 "zerocopy_threshold": 0, 00:20:09.833 "tls_version": 0, 00:20:09.833 "enable_ktls": false 00:20:09.833 } 00:20:09.833 } 00:20:09.833 ] 00:20:09.833 }, 00:20:09.833 { 00:20:09.833 "subsystem": "vmd", 00:20:09.833 "config": [] 00:20:09.833 }, 00:20:09.833 { 00:20:09.833 "subsystem": "accel", 00:20:09.833 "config": [ 00:20:09.833 { 00:20:09.833 "method": "accel_set_options", 00:20:09.833 "params": { 00:20:09.833 "small_cache_size": 128, 00:20:09.833 "large_cache_size": 16, 00:20:09.834 "task_count": 2048, 00:20:09.834 "sequence_count": 2048, 00:20:09.834 "buf_count": 2048 00:20:09.834 } 00:20:09.834 } 00:20:09.834 ] 00:20:09.834 }, 00:20:09.834 { 00:20:09.834 "subsystem": "bdev", 00:20:09.834 "config": [ 00:20:09.834 { 00:20:09.834 "method": "bdev_set_options", 00:20:09.834 "params": { 00:20:09.834 "bdev_io_pool_size": 65535, 00:20:09.834 "bdev_io_cache_size": 256, 00:20:09.834 "bdev_auto_examine": true, 00:20:09.834 "iobuf_small_cache_size": 128, 00:20:09.834 "iobuf_large_cache_size": 16 00:20:09.834 } 00:20:09.834 }, 00:20:09.834 { 00:20:09.834 "method": "bdev_raid_set_options", 00:20:09.834 "params": { 00:20:09.834 "process_window_size_kb": 1024, 00:20:09.834 "process_max_bandwidth_mb_sec": 0 00:20:09.834 } 00:20:09.834 }, 00:20:09.834 { 00:20:09.834 "method": "bdev_iscsi_set_options", 00:20:09.834 "params": { 00:20:09.834 "timeout_sec": 30 00:20:09.834 } 00:20:09.834 }, 00:20:09.834 { 00:20:09.834 "method": "bdev_nvme_set_options", 00:20:09.834 "params": { 00:20:09.834 "action_on_timeout": "none", 00:20:09.834 "timeout_us": 0, 00:20:09.834 "timeout_admin_us": 0, 00:20:09.834 "keep_alive_timeout_ms": 10000, 00:20:09.834 "arbitration_burst": 0, 00:20:09.834 "low_priority_weight": 0, 00:20:09.834 "medium_priority_weight": 0, 00:20:09.834 "high_priority_weight": 0, 00:20:09.834 "nvme_adminq_poll_period_us": 10000, 00:20:09.834 "nvme_ioq_poll_period_us": 0, 00:20:09.834 "io_queue_requests": 512, 00:20:09.834 "delay_cmd_submit": true, 00:20:09.834 "transport_retry_count": 4, 00:20:09.834 "bdev_retry_count": 3, 00:20:09.834 "transport_ack_timeout": 0, 00:20:09.834 "ctrlr_loss_timeout_sec": 0, 00:20:09.834 "reconnect_delay_sec": 0, 00:20:09.834 "fast_io_fail_timeout_sec": 0, 00:20:09.834 "disable_auto_failback": false, 00:20:09.834 "generate_uuids": false, 00:20:09.834 "transport_tos": 0, 00:20:09.834 "nvme_error_stat": false, 00:20:09.834 "rdma_srq_size": 0, 00:20:09.834 "io_path_stat": false, 00:20:09.834 "allow_accel_sequence": false, 00:20:09.834 "rdma_max_cq_size": 0, 00:20:09.834 "rdma_cm_event_timeout_ms": 0 09:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:09.834 , 00:20:09.834 "dhchap_digests": [ 00:20:09.834 "sha256", 00:20:09.834 "sha384", 00:20:09.834 "sha512" 00:20:09.834 ], 00:20:09.834 "dhchap_dhgroups": [ 00:20:09.834 "null", 00:20:09.834 "ffdhe2048", 00:20:09.834 "ffdhe3072", 00:20:09.834 "ffdhe4096", 00:20:09.834 "ffdhe6144", 00:20:09.834 "ffdhe8192" 00:20:09.834 ] 00:20:09.834 } 00:20:09.834 }, 00:20:09.834 { 00:20:09.834 "method": "bdev_nvme_attach_controller", 00:20:09.834 "params": { 00:20:09.834 "name": "nvme0", 00:20:09.834 "trtype": "TCP", 00:20:09.834 "adrfam": "IPv4", 00:20:09.834 "traddr": "10.0.0.2", 00:20:09.834 "trsvcid": "4420", 00:20:09.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.834 "prchk_reftag": false, 00:20:09.834 "prchk_guard": false, 00:20:09.834 "ctrlr_loss_timeout_sec": 0, 00:20:09.834 "reconnect_delay_sec": 0, 00:20:09.834 "fast_io_fail_timeout_sec": 0, 00:20:09.834 "psk": "key0", 00:20:09.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:09.834 "hdgst": false, 00:20:09.834 "ddgst": false, 00:20:09.834 "multipath": "multipath" 00:20:09.834 } 00:20:09.834 }, 00:20:09.834 { 00:20:09.834 "method": "bdev_nvme_set_hotplug", 00:20:09.834 "params": { 00:20:09.834 "period_us": 100000, 00:20:09.834 "enable": false 00:20:09.834 } 00:20:09.834 }, 00:20:09.834 { 00:20:09.834 "method": "bdev_enable_histogram", 00:20:09.834 "params": { 00:20:09.834 "name": "nvme0n1", 00:20:09.834 "enable": true 00:20:09.834 } 00:20:09.834 }, 00:20:09.834 { 00:20:09.834 "method": "bdev_wait_for_examine" 00:20:09.834 } 00:20:09.834 ] 00:20:09.834 }, 00:20:09.834 { 00:20:09.834 "subsystem": "nbd", 00:20:09.834 "config": [] 00:20:09.834 } 00:20:09.834 ] 00:20:09.834 }' 00:20:09.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.834 09:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.834 09:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.093 [2024-11-20 09:53:46.776891] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
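The bdevperf configuration echoed just above is the initiator-side mirror of the target config: the same PSK file is registered under the same key name, and bdev_nvme_attach_controller passes "psk": "key0" when it connects to the 10.0.0.2:4420 listener, which is the whole TLS hand-off between the two processes. A condensed restatement follows (values copied from the log; the launch line repeats the bdevperf flags printed a few lines earlier, with process substitution in place of /dev/fd/63):

    #!/usr/bin/env bash
    # Initiator-side counterpart: register the same PSK file as "key0" and hand it
    # to bdev_nvme_attach_controller so the TCP connection to 10.0.0.2:4420 is TLS.
    bdevperf_config='{
      "subsystems": [
        { "subsystem": "keyring", "config": [
            { "method": "keyring_file_add_key",
              "params": { "name": "key0", "path": "/tmp/tmp.pBzmfrvr1q" } } ] },
        { "subsystem": "bdev", "config": [
            { "method": "bdev_nvme_attach_controller",
              "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                          "traddr": "10.0.0.2", "trsvcid": "4420",
                          "subnqn": "nqn.2016-06.io.spdk:cnode1",
                          "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
            { "method": "bdev_wait_for_examine" } ] }
      ]
    }'
    # Same flags as the bdevperf line in the log; the resulting nvme0n1 is what the
    # 1-second verify workload below is run against.
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bdevperf_config")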
00:20:10.093 [2024-11-20 09:53:46.776971] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3763189 ] 00:20:10.093 [2024-11-20 09:53:46.844863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.093 [2024-11-20 09:53:46.903751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.351 [2024-11-20 09:53:47.088540] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:10.916 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.916 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:10.916 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:10.916 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:11.174 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.174 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:11.431 Running I/O for 1 seconds... 00:20:12.367 3253.00 IOPS, 12.71 MiB/s 00:20:12.367 Latency(us) 00:20:12.367 [2024-11-20T08:53:49.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.367 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:12.367 Verification LBA range: start 0x0 length 0x2000 00:20:12.367 nvme0n1 : 1.02 3314.68 12.95 0.00 0.00 38272.48 7524.50 35729.26 00:20:12.367 [2024-11-20T08:53:49.281Z] =================================================================================================================== 00:20:12.367 [2024-11-20T08:53:49.281Z] Total : 3314.68 12.95 0.00 0.00 38272.48 7524.50 35729.26 00:20:12.367 { 00:20:12.367 "results": [ 00:20:12.367 { 00:20:12.367 "job": "nvme0n1", 00:20:12.367 "core_mask": "0x2", 00:20:12.367 "workload": "verify", 00:20:12.367 "status": "finished", 00:20:12.367 "verify_range": { 00:20:12.367 "start": 0, 00:20:12.367 "length": 8192 00:20:12.367 }, 00:20:12.367 "queue_depth": 128, 00:20:12.367 "io_size": 4096, 00:20:12.367 "runtime": 1.020311, 00:20:12.367 "iops": 3314.6756234128616, 00:20:12.367 "mibps": 12.94795165395649, 00:20:12.367 "io_failed": 0, 00:20:12.367 "io_timeout": 0, 00:20:12.367 "avg_latency_us": 38272.48141927853, 00:20:12.367 "min_latency_us": 7524.503703703704, 00:20:12.367 "max_latency_us": 35729.2562962963 00:20:12.367 } 00:20:12.367 ], 00:20:12.367 "core_count": 1 00:20:12.367 } 00:20:12.367 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:12.367 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:12.367 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:12.367 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:12.367 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:12.367 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:20:12.367 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:12.367 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:12.367 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:12.367 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:12.367 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:12.367 nvmf_trace.0 00:20:12.367 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:12.367 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3763189 00:20:12.367 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3763189 ']' 00:20:12.367 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3763189 00:20:12.367 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:12.367 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.367 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3763189 00:20:12.625 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:12.625 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:12.625 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3763189' 00:20:12.625 killing process with pid 3763189 00:20:12.625 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3763189 00:20:12.625 Received shutdown signal, test time was about 1.000000 seconds 00:20:12.625 00:20:12.625 Latency(us) 00:20:12.625 [2024-11-20T08:53:49.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.625 [2024-11-20T08:53:49.539Z] =================================================================================================================== 00:20:12.625 [2024-11-20T08:53:49.539Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.625 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3763189 00:20:12.625 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:12.625 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:12.625 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:12.625 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:12.625 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:12.625 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:12.625 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:12.883 rmmod nvme_tcp 00:20:12.883 rmmod nvme_fabrics 00:20:12.883 rmmod nvme_keyring 00:20:12.883 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:12.883 09:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:12.883 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:12.883 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3763039 ']' 00:20:12.883 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3763039 00:20:12.883 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3763039 ']' 00:20:12.883 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3763039 00:20:12.883 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:12.883 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.883 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3763039 00:20:12.883 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:12.883 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:12.883 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3763039' 00:20:12.883 killing process with pid 3763039 00:20:12.883 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3763039 00:20:12.883 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3763039 00:20:13.143 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:13.143 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:13.143 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:13.143 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:13.143 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:13.143 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:13.143 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:13.143 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:13.143 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:13.143 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.143 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.143 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.053 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:15.053 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.odUnBvpkrd /tmp/tmp.REbmCQLKZt /tmp/tmp.pBzmfrvr1q 00:20:15.053 00:20:15.053 real 1m23.750s 00:20:15.053 user 2m21.339s 00:20:15.053 sys 0m24.738s 00:20:15.053 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:15.053 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.053 ************************************ 00:20:15.053 END TEST nvmf_tls 
00:20:15.053 ************************************ 00:20:15.053 09:53:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:15.053 09:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:15.053 09:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:15.053 09:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:15.312 ************************************ 00:20:15.312 START TEST nvmf_fips 00:20:15.312 ************************************ 00:20:15.312 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:15.312 * Looking for test storage... 00:20:15.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:15.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.312 --rc genhtml_branch_coverage=1 00:20:15.312 --rc genhtml_function_coverage=1 00:20:15.312 --rc genhtml_legend=1 00:20:15.312 --rc geninfo_all_blocks=1 00:20:15.312 --rc geninfo_unexecuted_blocks=1 00:20:15.312 00:20:15.312 ' 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:15.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.312 --rc genhtml_branch_coverage=1 00:20:15.312 --rc genhtml_function_coverage=1 00:20:15.312 --rc genhtml_legend=1 00:20:15.312 --rc geninfo_all_blocks=1 00:20:15.312 --rc geninfo_unexecuted_blocks=1 00:20:15.312 00:20:15.312 ' 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:15.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.312 --rc genhtml_branch_coverage=1 00:20:15.312 --rc genhtml_function_coverage=1 00:20:15.312 --rc genhtml_legend=1 00:20:15.312 --rc geninfo_all_blocks=1 00:20:15.312 --rc geninfo_unexecuted_blocks=1 00:20:15.312 00:20:15.312 ' 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:15.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.312 --rc genhtml_branch_coverage=1 00:20:15.312 --rc genhtml_function_coverage=1 00:20:15.312 --rc genhtml_legend=1 00:20:15.312 --rc geninfo_all_blocks=1 00:20:15.312 --rc geninfo_unexecuted_blocks=1 00:20:15.312 00:20:15.312 ' 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.312 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:15.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:15.313 09:53:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:15.313 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:15.314 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:15.572 Error setting digest 00:20:15.572 4042FEC8897F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:15.572 4042FEC8897F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:15.572 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:15.572 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:15.572 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:15.572 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:15.572 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:15.572 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:15.572 
09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.572 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:15.572 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:15.572 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:15.572 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.572 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.572 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.572 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:15.572 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:15.572 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:15.572 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:17.477 09:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:17.477 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:17.477 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:17.477 09:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:17.477 Found net devices under 0000:09:00.0: cvl_0_0 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:17.477 Found net devices under 0000:09:00.1: cvl_0_1 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:17.477 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:17.478 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:17.478 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:17.478 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:17.478 09:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:17.478 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:17.478 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.478 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:17.478 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:17.478 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:17.478 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:17.478 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:17.478 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:17.478 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:17.478 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:17.735 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:17.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:20:17.736 00:20:17.736 --- 10.0.0.2 ping statistics --- 00:20:17.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.736 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:17.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:17.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:20:17.736 00:20:17.736 --- 10.0.0.1 ping statistics --- 00:20:17.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.736 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3765560 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3765560 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3765560 ']' 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.736 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:17.736 [2024-11-20 09:53:54.535449] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
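For reference, the interface bring-up traced above (nvmf_tcp_init in nvmf/common.sh) amounts to moving one port of the E810 pair into a private network namespace, addressing both sides, and opening TCP/4420 through iptables. A minimal sketch using only commands visible in this run, with the long iptables comment shortened; the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing are taken from this log:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic in on the initiator-side port; the SPDK_NVMF comment is what cleanup greps for later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
# verify both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1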
00:20:17.736 [2024-11-20 09:53:54.535569] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.736 [2024-11-20 09:53:54.608415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.994 [2024-11-20 09:53:54.668225] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.994 [2024-11-20 09:53:54.668274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.994 [2024-11-20 09:53:54.668288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.994 [2024-11-20 09:53:54.668299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.994 [2024-11-20 09:53:54.668318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.994 [2024-11-20 09:53:54.668917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.994 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.994 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:17.994 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:17.994 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:17.994 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:17.994 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.994 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:17.994 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:17.994 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:17.994 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.MOU 00:20:17.994 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:17.994 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.MOU 00:20:17.994 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.MOU 00:20:17.994 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.MOU 00:20:17.994 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:18.252 [2024-11-20 09:53:55.121887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.252 [2024-11-20 09:53:55.137909] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:18.252 [2024-11-20 09:53:55.138135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.510 malloc0 00:20:18.510 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:18.510 09:53:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3765586 00:20:18.510 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.510 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3765586 /var/tmp/bdevperf.sock 00:20:18.510 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3765586 ']' 00:20:18.510 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.510 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.510 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.510 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.510 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:18.510 [2024-11-20 09:53:55.273500] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:20:18.510 [2024-11-20 09:53:55.273617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3765586 ] 00:20:18.510 [2024-11-20 09:53:55.341112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.510 [2024-11-20 09:53:55.404096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.768 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.768 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:18.768 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.MOU 00:20:19.025 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:19.282 [2024-11-20 09:53:56.022573] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.282 TLSTESTn1 00:20:19.282 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:19.539 Running I/O for 10 seconds... 
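The TLS leg of the fips test above is wired up entirely over JSON-RPC against the bdevperf app: the interchange PSK is written to a mode-0600 file, registered as a named key, and then referenced when the controller is attached. A condensed sketch of those steps, reusing the PSK, socket path and NQNs from this log (SPDK paths abbreviated relative to the repository root):

# persist the TLS PSK with restrictive permissions (mktemp produced /tmp/spdk-psk.MOU in this run)
key_path=$(mktemp -t spdk-psk.XXX)
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"

# register the key with bdevperf and attach a TLS-enabled NVMe/TCP controller that uses it
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# run the workload bdevperf was started with (-q 128 -o 4096 -w verify -t 10)
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests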
00:20:21.404 3386.00 IOPS, 13.23 MiB/s [2024-11-20T08:53:59.252Z] 3358.50 IOPS, 13.12 MiB/s [2024-11-20T08:54:00.624Z] 3400.67 IOPS, 13.28 MiB/s [2024-11-20T08:54:01.557Z] 3450.50 IOPS, 13.48 MiB/s [2024-11-20T08:54:02.489Z] 3456.20 IOPS, 13.50 MiB/s [2024-11-20T08:54:03.420Z] 3464.67 IOPS, 13.53 MiB/s [2024-11-20T08:54:04.351Z] 3488.71 IOPS, 13.63 MiB/s [2024-11-20T08:54:05.284Z] 3476.38 IOPS, 13.58 MiB/s [2024-11-20T08:54:06.658Z] 3491.44 IOPS, 13.64 MiB/s [2024-11-20T08:54:06.658Z] 3505.60 IOPS, 13.69 MiB/s 00:20:29.744 Latency(us) 00:20:29.744 [2024-11-20T08:54:06.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.744 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:29.744 Verification LBA range: start 0x0 length 0x2000 00:20:29.744 TLSTESTn1 : 10.02 3511.46 13.72 0.00 0.00 36392.28 6310.87 48739.37 00:20:29.744 [2024-11-20T08:54:06.658Z] =================================================================================================================== 00:20:29.744 [2024-11-20T08:54:06.658Z] Total : 3511.46 13.72 0.00 0.00 36392.28 6310.87 48739.37 00:20:29.744 { 00:20:29.744 "results": [ 00:20:29.744 { 00:20:29.744 "job": "TLSTESTn1", 00:20:29.744 "core_mask": "0x4", 00:20:29.744 "workload": "verify", 00:20:29.744 "status": "finished", 00:20:29.744 "verify_range": { 00:20:29.744 "start": 0, 00:20:29.744 "length": 8192 00:20:29.744 }, 00:20:29.744 "queue_depth": 128, 00:20:29.744 "io_size": 4096, 00:20:29.744 "runtime": 10.019195, 00:20:29.744 "iops": 3511.4597530041087, 00:20:29.744 "mibps": 13.7166396601723, 00:20:29.744 "io_failed": 0, 00:20:29.744 "io_timeout": 0, 00:20:29.744 "avg_latency_us": 36392.28125291342, 00:20:29.744 "min_latency_us": 6310.874074074074, 00:20:29.744 "max_latency_us": 48739.36592592593 00:20:29.744 } 00:20:29.744 ], 00:20:29.744 "core_count": 1 00:20:29.744 } 00:20:29.744 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:29.744 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:29.744 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:29.744 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:29.744 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:29.744 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:29.744 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:29.744 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:29.744 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:29.745 nvmf_trace.0 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3765586 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3765586 ']' 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 3765586 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3765586 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3765586' 00:20:29.745 killing process with pid 3765586 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3765586 00:20:29.745 Received shutdown signal, test time was about 10.000000 seconds 00:20:29.745 00:20:29.745 Latency(us) 00:20:29.745 [2024-11-20T08:54:06.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.745 [2024-11-20T08:54:06.659Z] =================================================================================================================== 00:20:29.745 [2024-11-20T08:54:06.659Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3765586 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:29.745 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:29.745 rmmod nvme_tcp 00:20:29.745 rmmod nvme_fabrics 00:20:30.003 rmmod nvme_keyring 00:20:30.003 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:30.003 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:30.003 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:30.003 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3765560 ']' 00:20:30.003 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3765560 00:20:30.003 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3765560 ']' 00:20:30.003 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3765560 00:20:30.003 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:30.003 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.003 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3765560 00:20:30.003 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:30.003 09:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:30.003 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3765560' 00:20:30.003 killing process with pid 3765560 00:20:30.003 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3765560 00:20:30.003 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3765560 00:20:30.262 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:30.262 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:30.262 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:30.262 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:30.262 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:30.262 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:30.262 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:30.262 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:30.262 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:30.263 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.263 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.263 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.163 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:32.163 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.MOU 00:20:32.163 00:20:32.163 real 0m17.029s 00:20:32.163 user 0m22.613s 00:20:32.163 sys 0m5.426s 00:20:32.163 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:32.163 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:32.163 ************************************ 00:20:32.163 END TEST nvmf_fips 00:20:32.163 ************************************ 00:20:32.163 09:54:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:32.163 09:54:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:32.163 09:54:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:32.163 09:54:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:32.163 ************************************ 00:20:32.163 START TEST nvmf_control_msg_list 00:20:32.163 ************************************ 00:20:32.163 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:32.420 * Looking for test storage... 
00:20:32.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:32.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.420 --rc genhtml_branch_coverage=1 00:20:32.420 --rc genhtml_function_coverage=1 00:20:32.420 --rc genhtml_legend=1 00:20:32.420 --rc geninfo_all_blocks=1 00:20:32.420 --rc geninfo_unexecuted_blocks=1 00:20:32.420 00:20:32.420 ' 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:32.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.420 --rc genhtml_branch_coverage=1 00:20:32.420 --rc genhtml_function_coverage=1 00:20:32.420 --rc genhtml_legend=1 00:20:32.420 --rc geninfo_all_blocks=1 00:20:32.420 --rc geninfo_unexecuted_blocks=1 00:20:32.420 00:20:32.420 ' 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:32.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.420 --rc genhtml_branch_coverage=1 00:20:32.420 --rc genhtml_function_coverage=1 00:20:32.420 --rc genhtml_legend=1 00:20:32.420 --rc geninfo_all_blocks=1 00:20:32.420 --rc geninfo_unexecuted_blocks=1 00:20:32.420 00:20:32.420 ' 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:32.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.420 --rc genhtml_branch_coverage=1 00:20:32.420 --rc genhtml_function_coverage=1 00:20:32.420 --rc genhtml_legend=1 00:20:32.420 --rc geninfo_all_blocks=1 00:20:32.420 --rc geninfo_unexecuted_blocks=1 00:20:32.420 00:20:32.420 ' 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.420 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:32.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:32.421 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:34.952 09:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:34.952 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.952 09:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:34.952 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:34.952 Found net devices under 0000:09:00.0: cvl_0_0 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:34.952 Found net devices under 0000:09:00.1: cvl_0_1 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:34.952 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.953 09:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:34.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:20:34.953 00:20:34.953 --- 10.0.0.2 ping statistics --- 00:20:34.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.953 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:34.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:20:34.953 00:20:34.953 --- 10.0.0.1 ping statistics --- 00:20:34.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.953 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3769589 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3769589 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3769589 ']' 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:34.953 [2024-11-20 09:54:11.573916] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:20:34.953 [2024-11-20 09:54:11.573994] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.953 [2024-11-20 09:54:11.647373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.953 [2024-11-20 09:54:11.704626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.953 [2024-11-20 09:54:11.704696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.953 [2024-11-20 09:54:11.704709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.953 [2024-11-20 09:54:11.704720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.953 [2024-11-20 09:54:11.704729] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
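nvmfappstart here is just the target binary launched inside the namespace created earlier plus a wait for its RPC socket; a rough plain-shell equivalent, with waitforlisten approximated by a poll loop (the real helper in autotest_common.sh does more bookkeeping):

ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# crude stand-in for waitforlisten: block until /var/tmp/spdk.sock answers RPCs
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done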
00:20:34.953 [2024-11-20 09:54:11.705271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.953 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:34.953 [2024-11-20 09:54:11.850458] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.954 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.954 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:34.954 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.954 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:34.954 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.954 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:34.954 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.212 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:35.212 Malloc0 00:20:35.212 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.212 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:35.212 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.212 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:35.212 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.212 09:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:35.212 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.212 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:35.212 [2024-11-20 09:54:11.891101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.212 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.212 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3769620 00:20:35.212 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:35.212 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3769621 00:20:35.212 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:35.212 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3769622 00:20:35.212 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3769620 00:20:35.212 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:35.212 [2024-11-20 09:54:11.949584] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:35.212 [2024-11-20 09:54:11.959646] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:35.212 [2024-11-20 09:54:11.959890] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:36.146 Initializing NVMe Controllers 00:20:36.146 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:36.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:36.146 Initialization complete. Launching workers. 
00:20:36.146 ======================================================== 00:20:36.146 Latency(us) 00:20:36.146 Device Information : IOPS MiB/s Average min max 00:20:36.146 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40930.73 40706.34 41914.38 00:20:36.146 ======================================================== 00:20:36.146 Total : 25.00 0.10 40930.73 40706.34 41914.38 00:20:36.146 00:20:36.403 Initializing NVMe Controllers 00:20:36.403 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:36.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:36.403 Initialization complete. Launching workers. 00:20:36.403 ======================================================== 00:20:36.403 Latency(us) 00:20:36.403 Device Information : IOPS MiB/s Average min max 00:20:36.403 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40875.48 40323.69 40965.94 00:20:36.403 ======================================================== 00:20:36.403 Total : 25.00 0.10 40875.48 40323.69 40965.94 00:20:36.403 00:20:36.403 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3769621 00:20:36.403 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3769622 00:20:36.403 Initializing NVMe Controllers 00:20:36.403 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:36.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:36.403 Initialization complete. Launching workers. 00:20:36.403 ======================================================== 00:20:36.403 Latency(us) 00:20:36.404 Device Information : IOPS MiB/s Average min max 00:20:36.404 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 41047.85 40858.75 41935.00 00:20:36.404 ======================================================== 00:20:36.404 Total : 25.00 0.10 41047.85 40858.75 41935.00 00:20:36.404 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:36.404 rmmod nvme_tcp 00:20:36.404 rmmod nvme_fabrics 00:20:36.404 rmmod nvme_keyring 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@517 -- # '[' -n 3769589 ']' 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3769589 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3769589 ']' 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3769589 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3769589 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3769589' 00:20:36.404 killing process with pid 3769589 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3769589 00:20:36.404 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3769589 00:20:36.662 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:36.662 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:36.662 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:36.662 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:36.662 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:36.662 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:36.662 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:36.662 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:36.662 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:36.662 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.662 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.662 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:39.196 00:20:39.196 real 0m6.436s 00:20:39.196 user 0m5.812s 00:20:39.196 sys 0m2.536s 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:39.196 ************************************ 00:20:39.196 END TEST nvmf_control_msg_list 00:20:39.196 
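Condensed from the commands traced above, the control-msg-list case creates a TCP transport whose control-message pool holds a single entry, exports one malloc namespace at 10.0.0.2:4420, and then runs three queue-depth-1 spdk_nvme_perf readers on separate cores against it, which appears intended to make the queue pairs contend for that single control message. A bash sketch of the sequence (rpc_cmd is the harness's RPC wrapper seen in the trace; the loop stands in for the three explicit perf invocations and per-PID waits):

# Sketch of the traced sequence, not the test script itself.
# Transport options mirror NVMF_TRANSPORT_OPTS ('-t tcp -o') from the trace.
rpc_cmd nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
rpc_cmd bdev_malloc_create -b Malloc0 32 512
rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
addr='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
for core in 0x2 0x4 0x8; do
    "$perf" -c "$core" -q 1 -o 4096 -w randread -t 1 -r "$addr" &
done
wait    # all three readers must finish cleanly despite the one-entry control-message pool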
************************************ 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:39.196 ************************************ 00:20:39.196 START TEST nvmf_wait_for_buf 00:20:39.196 ************************************ 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:39.196 * Looking for test storage... 00:20:39.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.196 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:39.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.197 --rc genhtml_branch_coverage=1 00:20:39.197 --rc genhtml_function_coverage=1 00:20:39.197 --rc genhtml_legend=1 00:20:39.197 --rc geninfo_all_blocks=1 00:20:39.197 --rc geninfo_unexecuted_blocks=1 00:20:39.197 00:20:39.197 ' 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:39.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.197 --rc genhtml_branch_coverage=1 00:20:39.197 --rc genhtml_function_coverage=1 00:20:39.197 --rc genhtml_legend=1 00:20:39.197 --rc geninfo_all_blocks=1 00:20:39.197 --rc geninfo_unexecuted_blocks=1 00:20:39.197 00:20:39.197 ' 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:39.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.197 --rc genhtml_branch_coverage=1 00:20:39.197 --rc genhtml_function_coverage=1 00:20:39.197 --rc genhtml_legend=1 00:20:39.197 --rc geninfo_all_blocks=1 00:20:39.197 --rc geninfo_unexecuted_blocks=1 00:20:39.197 00:20:39.197 ' 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:39.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.197 --rc genhtml_branch_coverage=1 00:20:39.197 --rc genhtml_function_coverage=1 00:20:39.197 --rc genhtml_legend=1 00:20:39.197 --rc geninfo_all_blocks=1 00:20:39.197 --rc geninfo_unexecuted_blocks=1 00:20:39.197 00:20:39.197 ' 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:39.197 09:54:15 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:39.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:39.197 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:39.198 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:39.198 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:41.147 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:41.147 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:41.147 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:41.147 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:41.147 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:41.147 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:41.147 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:41.147 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:41.147 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:41.147 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:41.147 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:41.147 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:41.148 
09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:41.148 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:41.148 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:41.148 Found net devices under 0000:09:00.0: cvl_0_0 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:41.148 Found net devices under 0000:09:00.1: cvl_0_1 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:41.148 09:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:41.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:41.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:20:41.148 00:20:41.148 --- 10.0.0.2 ping statistics --- 00:20:41.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.148 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:41.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:41.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:20:41.148 00:20:41.148 --- 10.0.0.1 ping statistics --- 00:20:41.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.148 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:41.148 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:41.149 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:41.149 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:41.149 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:41.149 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:41.149 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3771697 00:20:41.149 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:41.149 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3771697 00:20:41.149 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3771697 ']' 00:20:41.149 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.149 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.149 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.149 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.149 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:41.436 [2024-11-20 09:54:18.050342] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
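This second target instance is started with --wait-for-rpc so that the wait_for_buf case traced below can shrink the shared iobuf small pool before framework initialization, then drive a queue-depth-4, 128 KiB (131072-byte) random-read load and require that the nvmf_TCP module had to retry small-buffer allocations at least once (the retry counter reported further down is 2006). A hedged bash sketch of that check, assembled from the RPCs and the jq filter visible in the trace below:

# Sketch of the buffer-starvation check performed by the wait_for_buf case.
rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
rpc_cmd framework_start_init
rpc_cmd bdev_malloc_create -b Malloc0 32 512
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24   # small shared/per-queue buffer counts
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

retry_count=$(rpc_cmd iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
(( retry_count > 0 )) || exit 1   # the test expects at least one retried small-buffer request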
00:20:41.436 [2024-11-20 09:54:18.050424] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.436 [2024-11-20 09:54:18.125123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.436 [2024-11-20 09:54:18.184368] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.436 [2024-11-20 09:54:18.184421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.436 [2024-11-20 09:54:18.184435] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:41.436 [2024-11-20 09:54:18.184446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:41.436 [2024-11-20 09:54:18.184457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:41.436 [2024-11-20 09:54:18.185032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.436 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.436 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:41.436 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:41.436 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:41.436 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:41.436 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.436 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:41.436 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:41.436 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:41.436 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.436 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:41.436 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.436 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:41.436 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.436 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:41.436 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.436 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:41.436 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.436 09:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:41.694 Malloc0 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:41.694 [2024-11-20 09:54:18.429174] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:41.694 [2024-11-20 09:54:18.453400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.694 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:41.694 [2024-11-20 09:54:18.537408] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:43.596 Initializing NVMe Controllers 00:20:43.596 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:43.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:43.596 Initialization complete. Launching workers. 00:20:43.596 ======================================================== 00:20:43.596 Latency(us) 00:20:43.596 Device Information : IOPS MiB/s Average min max 00:20:43.596 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.00 15.87 32800.93 7993.22 63869.03 00:20:43.596 ======================================================== 00:20:43.596 Total : 127.00 15.87 32800.93 7993.22 63869.03 00:20:43.596 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2006 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2006 -eq 0 ]] 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:43.596 rmmod nvme_tcp 00:20:43.596 rmmod nvme_fabrics 00:20:43.596 rmmod nvme_keyring 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3771697 ']' 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3771697 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3771697 ']' 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3771697 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3771697 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3771697' 00:20:43.596 killing process with pid 3771697 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3771697 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3771697 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.596 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.134 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:46.134 00:20:46.134 real 0m6.922s 00:20:46.134 user 0m3.299s 00:20:46.134 sys 0m2.094s 00:20:46.134 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.134 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:46.134 ************************************ 00:20:46.134 END TEST nvmf_wait_for_buf 00:20:46.134 ************************************ 00:20:46.134 09:54:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:46.134 09:54:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:46.134 09:54:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:46.134 09:54:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:46.134 09:54:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:46.134 09:54:22 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:48.035 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:48.035 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:48.035 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:48.036 Found net devices under 0000:09:00.0: cvl_0_0 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:48.036 Found net devices under 0000:09:00.1: cvl_0_1 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:48.036 ************************************ 00:20:48.036 START TEST nvmf_perf_adq 00:20:48.036 ************************************ 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:48.036 * Looking for test storage... 00:20:48.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:48.036 09:54:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:48.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.036 --rc genhtml_branch_coverage=1 00:20:48.036 --rc genhtml_function_coverage=1 00:20:48.036 --rc genhtml_legend=1 00:20:48.036 --rc geninfo_all_blocks=1 00:20:48.036 --rc geninfo_unexecuted_blocks=1 00:20:48.036 00:20:48.036 ' 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:48.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.036 --rc genhtml_branch_coverage=1 00:20:48.036 --rc genhtml_function_coverage=1 00:20:48.036 --rc genhtml_legend=1 00:20:48.036 --rc geninfo_all_blocks=1 00:20:48.036 --rc geninfo_unexecuted_blocks=1 00:20:48.036 00:20:48.036 ' 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:48.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.036 --rc genhtml_branch_coverage=1 00:20:48.036 --rc genhtml_function_coverage=1 00:20:48.036 --rc genhtml_legend=1 00:20:48.036 --rc geninfo_all_blocks=1 00:20:48.036 --rc geninfo_unexecuted_blocks=1 00:20:48.036 00:20:48.036 ' 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:48.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.036 --rc genhtml_branch_coverage=1 00:20:48.036 --rc genhtml_function_coverage=1 00:20:48.036 --rc genhtml_legend=1 00:20:48.036 --rc geninfo_all_blocks=1 00:20:48.036 --rc geninfo_unexecuted_blocks=1 00:20:48.036 00:20:48.036 ' 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
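The cmp_versions walk traced above is the harness deciding whether the installed lcov predates 2.0 before it exports the legacy --rc lcov_branch_coverage/lcov_function_coverage options. A minimal, self-contained sketch of that kind of dotted-version check is below; the helper name version_lt and the use of sort -V are illustrative assumptions, not the actual scripts/common.sh code, which compares the dot-separated fields one by one exactly as the trace shows.

# Illustrative sketch only -- not the scripts/common.sh implementation traced above.
# Succeeds (returns 0) when $1 is strictly older than $2, e.g. version_lt 1.15 2.
version_lt() {
    local a=$1 b=$2
    [ "$a" = "$b" ] && return 1
    # GNU sort -V orders dotted version strings; the older one sorts first.
    [ "$(printf '%s\n' "$a" "$b" | sort -V | head -n1)" = "$a" ]
}

# Mirrors the trace: lcov 1.15 is older than 2, so the legacy --rc options apply.
if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi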
00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.036 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.037 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.037 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:48.037 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.037 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:48.037 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:48.037 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:48.037 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.037 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.037 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.037 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:48.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:48.037 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:48.037 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:48.037 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:48.037 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:48.037 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:48.037 09:54:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:50.570 09:54:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:50.570 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:50.570 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.570 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:50.571 Found net devices under 0000:09:00.0: cvl_0_0 00:20:50.571 09:54:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.571 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.571 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.571 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.571 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.571 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.571 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.571 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.571 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:50.571 Found net devices under 0000:09:00.1: cvl_0_1 00:20:50.571 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.571 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:50.571 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.571 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:50.571 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:50.571 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:50.571 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:50.571 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:50.829 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:52.730 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:58.007 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:58.007 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.007 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:58.007 Found net devices under 0000:09:00.0: cvl_0_0 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:58.008 Found net devices under 0000:09:00.1: cvl_0_1 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:58.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:58.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:20:58.008 00:20:58.008 --- 10.0.0.2 ping statistics --- 00:20:58.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.008 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:58.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:58.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:20:58.008 00:20:58.008 --- 10.0.0.1 ping statistics --- 00:20:58.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.008 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3776539 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3776539 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3776539 ']' 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.008 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.008 [2024-11-20 09:54:34.843819] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:20:58.008 [2024-11-20 09:54:34.843906] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.008 [2024-11-20 09:54:34.916159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:58.267 [2024-11-20 09:54:34.976640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.267 [2024-11-20 09:54:34.976697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.267 [2024-11-20 09:54:34.976720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.267 [2024-11-20 09:54:34.976730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.267 [2024-11-20 09:54:34.976740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:58.267 [2024-11-20 09:54:34.978255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.267 [2024-11-20 09:54:34.978349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.267 [2024-11-20 09:54:34.978376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:58.267 [2024-11-20 09:54:34.978379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.267 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.267 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:58.267 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:58.267 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:58.267 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.267 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.267 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:58.267 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:58.267 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:58.267 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.267 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.267 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.267 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:58.267 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:58.267 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.267 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.267 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.267 
09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:58.267 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.267 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.525 [2024-11-20 09:54:35.247814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.525 Malloc1 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.525 [2024-11-20 09:54:35.308949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3776571 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:58.525 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:00.425 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:00.425 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.425 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.425 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.425 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:00.425 "tick_rate": 2700000000, 00:21:00.425 "poll_groups": [ 00:21:00.425 { 00:21:00.425 "name": "nvmf_tgt_poll_group_000", 00:21:00.425 "admin_qpairs": 1, 00:21:00.425 "io_qpairs": 1, 00:21:00.425 "current_admin_qpairs": 1, 00:21:00.425 "current_io_qpairs": 1, 00:21:00.425 "pending_bdev_io": 0, 00:21:00.425 "completed_nvme_io": 19303, 00:21:00.425 "transports": [ 00:21:00.425 { 00:21:00.425 "trtype": "TCP" 00:21:00.425 } 00:21:00.425 ] 00:21:00.425 }, 00:21:00.425 { 00:21:00.425 "name": "nvmf_tgt_poll_group_001", 00:21:00.425 "admin_qpairs": 0, 00:21:00.425 "io_qpairs": 1, 00:21:00.425 "current_admin_qpairs": 0, 00:21:00.425 "current_io_qpairs": 1, 00:21:00.425 "pending_bdev_io": 0, 00:21:00.425 "completed_nvme_io": 19388, 00:21:00.425 "transports": [ 00:21:00.425 { 00:21:00.425 "trtype": "TCP" 00:21:00.425 } 00:21:00.425 ] 00:21:00.425 }, 00:21:00.425 { 00:21:00.425 "name": "nvmf_tgt_poll_group_002", 00:21:00.425 "admin_qpairs": 0, 00:21:00.425 "io_qpairs": 1, 00:21:00.425 "current_admin_qpairs": 0, 00:21:00.425 "current_io_qpairs": 1, 00:21:00.425 "pending_bdev_io": 0, 00:21:00.425 "completed_nvme_io": 19590, 00:21:00.425 "transports": [ 00:21:00.425 { 00:21:00.425 "trtype": "TCP" 00:21:00.425 } 00:21:00.425 ] 00:21:00.425 }, 00:21:00.425 { 00:21:00.425 "name": "nvmf_tgt_poll_group_003", 00:21:00.425 "admin_qpairs": 0, 00:21:00.425 "io_qpairs": 1, 00:21:00.425 "current_admin_qpairs": 0, 00:21:00.425 "current_io_qpairs": 1, 00:21:00.425 "pending_bdev_io": 0, 00:21:00.425 "completed_nvme_io": 19303, 00:21:00.425 "transports": [ 00:21:00.425 { 00:21:00.425 "trtype": "TCP" 00:21:00.425 } 00:21:00.425 ] 00:21:00.425 } 00:21:00.425 ] 00:21:00.425 }' 00:21:00.689 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:00.689 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:00.689 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:00.689 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:00.689 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3776571 00:21:08.802 Initializing NVMe Controllers 00:21:08.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:08.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:08.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:08.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:08.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:21:08.802 Initialization complete. Launching workers. 00:21:08.802 ======================================================== 00:21:08.802 Latency(us) 00:21:08.802 Device Information : IOPS MiB/s Average min max 00:21:08.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10344.70 40.41 6186.37 2254.38 10029.43 00:21:08.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10459.80 40.86 6119.66 2343.20 10621.25 00:21:08.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10505.50 41.04 6093.87 2238.54 10395.54 00:21:08.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10404.10 40.64 6151.75 2491.54 10105.42 00:21:08.802 ======================================================== 00:21:08.802 Total : 41714.10 162.95 6137.71 2238.54 10621.25 00:21:08.802 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:08.802 rmmod nvme_tcp 00:21:08.802 rmmod nvme_fabrics 00:21:08.802 rmmod nvme_keyring 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3776539 ']' 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3776539 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3776539 ']' 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3776539 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3776539 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3776539' 00:21:08.802 killing process with pid 3776539 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3776539 00:21:08.802 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3776539 00:21:09.061 09:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:09.061 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:09.061 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:09.061 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:09.061 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:09.061 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:09.061 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:09.061 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:09.061 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:09.061 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.061 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.061 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.595 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:11.595 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:11.595 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:11.595 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:11.854 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:13.756 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:19.028 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:19.029 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:19.029 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:19.029 Found net devices under 0000:09:00.0: cvl_0_0 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:19.029 Found net devices under 0000:09:00.1: cvl_0_1 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.029 09:54:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.029 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:19.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:21:19.030 00:21:19.030 --- 10.0.0.2 ping statistics --- 00:21:19.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.030 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:19.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:21:19.030 00:21:19.030 --- 10.0.0.1 ping statistics --- 00:21:19.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.030 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:19.030 net.core.busy_poll = 1 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:19.030 net.core.busy_read = 1 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:19.030 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:19.287 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:19.287 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:19.287 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:19.287 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:19.287 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:19.287 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.287 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3779205 00:21:19.287 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:19.287 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3779205 00:21:19.287 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3779205 ']' 00:21:19.287 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.287 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.287 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.287 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.287 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.287 [2024-11-20 09:54:56.040539] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:21:19.287 [2024-11-20 09:54:56.040643] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.287 [2024-11-20 09:54:56.120989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:19.287 [2024-11-20 09:54:56.181864] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:19.287 [2024-11-20 09:54:56.181923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.287 [2024-11-20 09:54:56.181937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.288 [2024-11-20 09:54:56.181948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.288 [2024-11-20 09:54:56.181958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.288 [2024-11-20 09:54:56.185339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.288 [2024-11-20 09:54:56.185367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.288 [2024-11-20 09:54:56.185427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:19.288 [2024-11-20 09:54:56.185430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.547 09:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.547 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.547 [2024-11-20 09:54:56.458715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.805 Malloc1 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.805 [2024-11-20 09:54:56.521146] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3779354 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:19.805 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:21.705 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:21.705 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.705 09:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:21.705 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.705 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:21.705 "tick_rate": 2700000000, 00:21:21.705 "poll_groups": [ 00:21:21.705 { 00:21:21.705 "name": "nvmf_tgt_poll_group_000", 00:21:21.705 "admin_qpairs": 1, 00:21:21.705 "io_qpairs": 4, 00:21:21.705 "current_admin_qpairs": 1, 00:21:21.705 "current_io_qpairs": 4, 00:21:21.705 "pending_bdev_io": 0, 00:21:21.705 "completed_nvme_io": 34026, 00:21:21.705 "transports": [ 00:21:21.705 { 00:21:21.705 "trtype": "TCP" 00:21:21.705 } 00:21:21.705 ] 00:21:21.705 }, 00:21:21.705 { 00:21:21.705 "name": "nvmf_tgt_poll_group_001", 00:21:21.705 "admin_qpairs": 0, 00:21:21.705 "io_qpairs": 0, 00:21:21.705 "current_admin_qpairs": 0, 00:21:21.705 "current_io_qpairs": 0, 00:21:21.705 "pending_bdev_io": 0, 00:21:21.705 "completed_nvme_io": 0, 00:21:21.705 "transports": [ 00:21:21.705 { 00:21:21.705 "trtype": "TCP" 00:21:21.705 } 00:21:21.705 ] 00:21:21.705 }, 00:21:21.705 { 00:21:21.705 "name": "nvmf_tgt_poll_group_002", 00:21:21.705 "admin_qpairs": 0, 00:21:21.705 "io_qpairs": 0, 00:21:21.705 "current_admin_qpairs": 0, 00:21:21.705 "current_io_qpairs": 0, 00:21:21.705 "pending_bdev_io": 0, 00:21:21.705 "completed_nvme_io": 0, 00:21:21.705 "transports": [ 00:21:21.705 { 00:21:21.705 "trtype": "TCP" 00:21:21.705 } 00:21:21.705 ] 00:21:21.705 }, 00:21:21.705 { 00:21:21.705 "name": "nvmf_tgt_poll_group_003", 00:21:21.705 "admin_qpairs": 0, 00:21:21.705 "io_qpairs": 0, 00:21:21.705 "current_admin_qpairs": 0, 00:21:21.705 "current_io_qpairs": 0, 00:21:21.705 "pending_bdev_io": 0, 00:21:21.705 "completed_nvme_io": 0, 00:21:21.705 "transports": [ 00:21:21.705 { 00:21:21.705 "trtype": "TCP" 00:21:21.705 } 00:21:21.705 ] 00:21:21.705 } 00:21:21.705 ] 00:21:21.705 }' 00:21:21.705 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:21.705 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:21.705 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:21:21.705 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:21:21.705 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3779354 00:21:29.812 Initializing NVMe Controllers 00:21:29.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:29.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:29.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:29.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:29.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:29.812 Initialization complete. Launching workers. 
00:21:29.812 ======================================================== 00:21:29.812 Latency(us) 00:21:29.812 Device Information : IOPS MiB/s Average min max 00:21:29.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4509.70 17.62 14194.83 1844.70 60810.92 00:21:29.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5221.90 20.40 12256.89 1814.99 60231.80 00:21:29.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 3607.30 14.09 17744.64 1875.79 61430.55 00:21:29.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4491.70 17.55 14248.28 2379.67 61879.17 00:21:29.812 ======================================================== 00:21:29.812 Total : 17830.60 69.65 14358.90 1814.99 61879.17 00:21:29.812 00:21:29.812 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:29.812 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:29.812 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:29.812 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:29.812 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:29.812 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:29.812 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:29.812 rmmod nvme_tcp 00:21:29.812 rmmod nvme_fabrics 00:21:29.812 rmmod nvme_keyring 00:21:29.812 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:29.812 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:29.812 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:29.812 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3779205 ']' 00:21:29.812 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3779205 00:21:29.812 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3779205 ']' 00:21:29.812 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3779205 00:21:30.070 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:30.070 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.070 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3779205 00:21:30.070 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:30.070 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:30.070 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3779205' 00:21:30.070 killing process with pid 3779205 00:21:30.070 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3779205 00:21:30.070 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3779205 00:21:30.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:30.330 
09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:30.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:30.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:30.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:30.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:30.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:30.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:30.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:30.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.330 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.241 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:32.241 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:32.241 00:21:32.241 real 0m44.420s 00:21:32.241 user 2m40.781s 00:21:32.241 sys 0m9.002s 00:21:32.241 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:32.241 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.241 ************************************ 00:21:32.241 END TEST nvmf_perf_adq 00:21:32.241 ************************************ 00:21:32.241 09:55:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:32.241 09:55:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:32.241 09:55:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:32.241 09:55:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:32.241 ************************************ 00:21:32.241 START TEST nvmf_shutdown 00:21:32.241 ************************************ 00:21:32.241 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:32.530 * Looking for test storage... 
00:21:32.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:32.530 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:32.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.531 --rc genhtml_branch_coverage=1 00:21:32.531 --rc genhtml_function_coverage=1 00:21:32.531 --rc genhtml_legend=1 00:21:32.531 --rc geninfo_all_blocks=1 00:21:32.531 --rc geninfo_unexecuted_blocks=1 00:21:32.531 00:21:32.531 ' 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:32.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.531 --rc genhtml_branch_coverage=1 00:21:32.531 --rc genhtml_function_coverage=1 00:21:32.531 --rc genhtml_legend=1 00:21:32.531 --rc geninfo_all_blocks=1 00:21:32.531 --rc geninfo_unexecuted_blocks=1 00:21:32.531 00:21:32.531 ' 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:32.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.531 --rc genhtml_branch_coverage=1 00:21:32.531 --rc genhtml_function_coverage=1 00:21:32.531 --rc genhtml_legend=1 00:21:32.531 --rc geninfo_all_blocks=1 00:21:32.531 --rc geninfo_unexecuted_blocks=1 00:21:32.531 00:21:32.531 ' 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:32.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.531 --rc genhtml_branch_coverage=1 00:21:32.531 --rc genhtml_function_coverage=1 00:21:32.531 --rc genhtml_legend=1 00:21:32.531 --rc geninfo_all_blocks=1 00:21:32.531 --rc geninfo_unexecuted_blocks=1 00:21:32.531 00:21:32.531 ' 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:32.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:32.531 09:55:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:32.531 ************************************ 00:21:32.531 START TEST nvmf_shutdown_tc1 00:21:32.531 ************************************ 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:32.531 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:35.099 09:55:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:35.099 09:55:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:35.099 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:35.099 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:35.099 Found net devices under 0000:09:00.0: cvl_0_0 00:21:35.099 09:55:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.099 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:35.100 Found net devices under 0000:09:00.1: cvl_0_1 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:35.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:21:35.100 00:21:35.100 --- 10.0.0.2 ping statistics --- 00:21:35.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.100 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:35.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:21:35.100 00:21:35.100 --- 10.0.0.1 ping statistics --- 00:21:35.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.100 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3782528 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3782528 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3782528 ']' 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
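The trace above is the harness's network bring-up for the NVMe/TCP path: the two physical E810 ports discovered earlier (cvl_0_0 and cvl_0_1, both 0x8086:0x159b) are split across a network namespace, the target side gets 10.0.0.2, the initiator side keeps 10.0.0.1, a tagged iptables rule opens TCP port 4420, and a ping in each direction confirms the link before nvmf_tgt is started inside the namespace with -i 0 -e 0xFFFF -m 0x1E. A condensed, stand-alone sketch of the same bring-up follows; it reuses the interface names and addresses from the log and is illustrative rather than the harness's exact code.

  NS=cvl_0_0_ns_spdk
  TGT_IF=cvl_0_0   # target-side port, moved into the namespace
  INI_IF=cvl_0_1   # initiator-side port, stays in the default namespace

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Open the NVMe/TCP listener port; the comment tag lets teardown find the rule later.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF

  # Verify both directions before launching the target inside the namespace.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1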
00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.100 [2024-11-20 09:55:11.601885] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:21:35.100 [2024-11-20 09:55:11.601966] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.100 [2024-11-20 09:55:11.675948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.100 [2024-11-20 09:55:11.736919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.100 [2024-11-20 09:55:11.736974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.100 [2024-11-20 09:55:11.736988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.100 [2024-11-20 09:55:11.736999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.100 [2024-11-20 09:55:11.737009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.100 [2024-11-20 09:55:11.738617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.100 [2024-11-20 09:55:11.738663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.100 [2024-11-20 09:55:11.738727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:35.100 [2024-11-20 09:55:11.738730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.100 [2024-11-20 09:55:11.898766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:35.100 09:55:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:35.100 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.101 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.101 Malloc1 
00:21:35.101 [2024-11-20 09:55:12.002086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.359 Malloc2 00:21:35.359 Malloc3 00:21:35.359 Malloc4 00:21:35.359 Malloc5 00:21:35.359 Malloc6 00:21:35.359 Malloc7 00:21:35.618 Malloc8 00:21:35.618 Malloc9 00:21:35.618 Malloc10 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3782707 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3782707 /var/tmp/bdevperf.sock 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3782707 ']' 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:35.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
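Malloc1 through Malloc10 above are the ten malloc bdevs created for the test subsystems (64 MB each with 512-byte blocks, per the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 settings earlier in this test), exported over the TCP transport created with nvmf_create_transport -t tcp -o -u 8192 and listening on 10.0.0.2 port 4420. The rpcs.txt batch itself is not reproduced in this excerpt, so the sequence below is only a representative per-subsystem RPC flow under those assumptions, with a placeholder rpc.py path; the NQNs match the ones in the generated JSON that follows.

  RPC="ip netns exec cvl_0_0_ns_spdk /path/to/spdk/scripts/rpc.py"   # placeholder path

  $RPC nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 10); do
      $RPC bdev_malloc_create -b "Malloc$i" 64 512      # 64 MB bdev, 512-byte blocks
      $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done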
00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.618 { 00:21:35.618 "params": { 00:21:35.618 "name": "Nvme$subsystem", 00:21:35.618 "trtype": "$TEST_TRANSPORT", 00:21:35.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.618 "adrfam": "ipv4", 00:21:35.618 "trsvcid": "$NVMF_PORT", 00:21:35.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.618 "hdgst": ${hdgst:-false}, 00:21:35.618 "ddgst": ${ddgst:-false} 00:21:35.618 }, 00:21:35.618 "method": "bdev_nvme_attach_controller" 00:21:35.618 } 00:21:35.618 EOF 00:21:35.618 )") 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.618 { 00:21:35.618 "params": { 00:21:35.618 "name": "Nvme$subsystem", 00:21:35.618 "trtype": "$TEST_TRANSPORT", 00:21:35.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.618 "adrfam": "ipv4", 00:21:35.618 "trsvcid": "$NVMF_PORT", 00:21:35.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.618 "hdgst": ${hdgst:-false}, 00:21:35.618 "ddgst": ${ddgst:-false} 00:21:35.618 }, 00:21:35.618 "method": "bdev_nvme_attach_controller" 00:21:35.618 } 00:21:35.618 EOF 00:21:35.618 )") 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.618 { 00:21:35.618 "params": { 00:21:35.618 "name": "Nvme$subsystem", 00:21:35.618 "trtype": "$TEST_TRANSPORT", 00:21:35.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.618 "adrfam": "ipv4", 00:21:35.618 "trsvcid": "$NVMF_PORT", 00:21:35.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.618 "hdgst": ${hdgst:-false}, 00:21:35.618 "ddgst": ${ddgst:-false} 00:21:35.618 }, 00:21:35.618 "method": "bdev_nvme_attach_controller" 00:21:35.618 } 00:21:35.618 EOF 00:21:35.618 )") 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.618 { 00:21:35.618 "params": { 00:21:35.618 "name": "Nvme$subsystem", 00:21:35.618 
"trtype": "$TEST_TRANSPORT", 00:21:35.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.618 "adrfam": "ipv4", 00:21:35.618 "trsvcid": "$NVMF_PORT", 00:21:35.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.618 "hdgst": ${hdgst:-false}, 00:21:35.618 "ddgst": ${ddgst:-false} 00:21:35.618 }, 00:21:35.618 "method": "bdev_nvme_attach_controller" 00:21:35.618 } 00:21:35.618 EOF 00:21:35.618 )") 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.618 { 00:21:35.618 "params": { 00:21:35.618 "name": "Nvme$subsystem", 00:21:35.618 "trtype": "$TEST_TRANSPORT", 00:21:35.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.618 "adrfam": "ipv4", 00:21:35.618 "trsvcid": "$NVMF_PORT", 00:21:35.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.618 "hdgst": ${hdgst:-false}, 00:21:35.618 "ddgst": ${ddgst:-false} 00:21:35.618 }, 00:21:35.618 "method": "bdev_nvme_attach_controller" 00:21:35.618 } 00:21:35.618 EOF 00:21:35.618 )") 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.618 { 00:21:35.618 "params": { 00:21:35.618 "name": "Nvme$subsystem", 00:21:35.618 "trtype": "$TEST_TRANSPORT", 00:21:35.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.618 "adrfam": "ipv4", 00:21:35.618 "trsvcid": "$NVMF_PORT", 00:21:35.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.618 "hdgst": ${hdgst:-false}, 00:21:35.618 "ddgst": ${ddgst:-false} 00:21:35.618 }, 00:21:35.618 "method": "bdev_nvme_attach_controller" 00:21:35.618 } 00:21:35.618 EOF 00:21:35.618 )") 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.618 { 00:21:35.618 "params": { 00:21:35.618 "name": "Nvme$subsystem", 00:21:35.618 "trtype": "$TEST_TRANSPORT", 00:21:35.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.618 "adrfam": "ipv4", 00:21:35.618 "trsvcid": "$NVMF_PORT", 00:21:35.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.618 "hdgst": ${hdgst:-false}, 00:21:35.618 "ddgst": ${ddgst:-false} 00:21:35.618 }, 00:21:35.618 "method": "bdev_nvme_attach_controller" 00:21:35.618 } 00:21:35.618 EOF 00:21:35.618 )") 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:35.618 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.618 09:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.618 { 00:21:35.618 "params": { 00:21:35.618 "name": "Nvme$subsystem", 00:21:35.618 "trtype": "$TEST_TRANSPORT", 00:21:35.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.618 "adrfam": "ipv4", 00:21:35.618 "trsvcid": "$NVMF_PORT", 00:21:35.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.618 "hdgst": ${hdgst:-false}, 00:21:35.618 "ddgst": ${ddgst:-false} 00:21:35.618 }, 00:21:35.618 "method": "bdev_nvme_attach_controller" 00:21:35.618 } 00:21:35.618 EOF 00:21:35.618 )") 00:21:35.619 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:35.619 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.619 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.619 { 00:21:35.619 "params": { 00:21:35.619 "name": "Nvme$subsystem", 00:21:35.619 "trtype": "$TEST_TRANSPORT", 00:21:35.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.619 "adrfam": "ipv4", 00:21:35.619 "trsvcid": "$NVMF_PORT", 00:21:35.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.619 "hdgst": ${hdgst:-false}, 00:21:35.619 "ddgst": ${ddgst:-false} 00:21:35.619 }, 00:21:35.619 "method": "bdev_nvme_attach_controller" 00:21:35.619 } 00:21:35.619 EOF 00:21:35.619 )") 00:21:35.619 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:35.619 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.619 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.619 { 00:21:35.619 "params": { 00:21:35.619 "name": "Nvme$subsystem", 00:21:35.619 "trtype": "$TEST_TRANSPORT", 00:21:35.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.619 "adrfam": "ipv4", 00:21:35.619 "trsvcid": "$NVMF_PORT", 00:21:35.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.619 "hdgst": ${hdgst:-false}, 00:21:35.619 "ddgst": ${ddgst:-false} 00:21:35.619 }, 00:21:35.619 "method": "bdev_nvme_attach_controller" 00:21:35.619 } 00:21:35.619 EOF 00:21:35.619 )") 00:21:35.619 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:35.619 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
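gen_nvmf_target_json, traced above, appends one bdev_nvme_attach_controller stanza per requested subsystem to the config array: each stanza is a small heredoc whose $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT references expand to the concrete values seen in the printed payload, the pieces are checked with jq, and they are comma-joined (IFS=,) into the JSON handed to the secondary app. A minimal reproduction of that build pattern is sketched below; it wraps the joined fragments in a bare JSON array so the example stands alone, whereas the real helper embeds them in a fuller target configuration.

  build_controllers() {
      local config=() subsystem
      for subsystem in "$@"; do
          config+=("$(cat <<EOF
  {
    "params": {
      "name": "Nvme$subsystem",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem"
    },
    "method": "bdev_nvme_attach_controller"
  }
EOF
          )")
      done
      local IFS=,
      printf '[%s]\n' "${config[*]}" | jq .   # bare array only so jq can validate the sketch
  }

  build_controllers 1 2 3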
00:21:35.619 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:35.619 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:35.619 "params": { 00:21:35.619 "name": "Nvme1", 00:21:35.619 "trtype": "tcp", 00:21:35.619 "traddr": "10.0.0.2", 00:21:35.619 "adrfam": "ipv4", 00:21:35.619 "trsvcid": "4420", 00:21:35.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:35.619 "hdgst": false, 00:21:35.619 "ddgst": false 00:21:35.619 }, 00:21:35.619 "method": "bdev_nvme_attach_controller" 00:21:35.619 },{ 00:21:35.619 "params": { 00:21:35.619 "name": "Nvme2", 00:21:35.619 "trtype": "tcp", 00:21:35.619 "traddr": "10.0.0.2", 00:21:35.619 "adrfam": "ipv4", 00:21:35.619 "trsvcid": "4420", 00:21:35.619 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:35.619 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:35.619 "hdgst": false, 00:21:35.619 "ddgst": false 00:21:35.619 }, 00:21:35.619 "method": "bdev_nvme_attach_controller" 00:21:35.619 },{ 00:21:35.619 "params": { 00:21:35.619 "name": "Nvme3", 00:21:35.619 "trtype": "tcp", 00:21:35.619 "traddr": "10.0.0.2", 00:21:35.619 "adrfam": "ipv4", 00:21:35.619 "trsvcid": "4420", 00:21:35.619 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:35.619 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:35.619 "hdgst": false, 00:21:35.619 "ddgst": false 00:21:35.619 }, 00:21:35.619 "method": "bdev_nvme_attach_controller" 00:21:35.619 },{ 00:21:35.619 "params": { 00:21:35.619 "name": "Nvme4", 00:21:35.619 "trtype": "tcp", 00:21:35.619 "traddr": "10.0.0.2", 00:21:35.619 "adrfam": "ipv4", 00:21:35.619 "trsvcid": "4420", 00:21:35.619 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:35.619 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:35.619 "hdgst": false, 00:21:35.619 "ddgst": false 00:21:35.619 }, 00:21:35.619 "method": "bdev_nvme_attach_controller" 00:21:35.619 },{ 00:21:35.619 "params": { 00:21:35.619 "name": "Nvme5", 00:21:35.619 "trtype": "tcp", 00:21:35.619 "traddr": "10.0.0.2", 00:21:35.619 "adrfam": "ipv4", 00:21:35.619 "trsvcid": "4420", 00:21:35.619 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:35.619 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:35.619 "hdgst": false, 00:21:35.619 "ddgst": false 00:21:35.619 }, 00:21:35.619 "method": "bdev_nvme_attach_controller" 00:21:35.619 },{ 00:21:35.619 "params": { 00:21:35.619 "name": "Nvme6", 00:21:35.619 "trtype": "tcp", 00:21:35.619 "traddr": "10.0.0.2", 00:21:35.619 "adrfam": "ipv4", 00:21:35.619 "trsvcid": "4420", 00:21:35.619 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:35.619 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:35.619 "hdgst": false, 00:21:35.619 "ddgst": false 00:21:35.619 }, 00:21:35.619 "method": "bdev_nvme_attach_controller" 00:21:35.619 },{ 00:21:35.619 "params": { 00:21:35.619 "name": "Nvme7", 00:21:35.619 "trtype": "tcp", 00:21:35.619 "traddr": "10.0.0.2", 00:21:35.619 "adrfam": "ipv4", 00:21:35.619 "trsvcid": "4420", 00:21:35.619 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:35.619 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:35.619 "hdgst": false, 00:21:35.619 "ddgst": false 00:21:35.619 }, 00:21:35.619 "method": "bdev_nvme_attach_controller" 00:21:35.619 },{ 00:21:35.619 "params": { 00:21:35.619 "name": "Nvme8", 00:21:35.619 "trtype": "tcp", 00:21:35.619 "traddr": "10.0.0.2", 00:21:35.619 "adrfam": "ipv4", 00:21:35.619 "trsvcid": "4420", 00:21:35.619 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:35.619 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:35.619 "hdgst": false, 00:21:35.619 "ddgst": false 00:21:35.619 }, 00:21:35.619 "method": "bdev_nvme_attach_controller" 00:21:35.619 },{ 00:21:35.619 "params": { 00:21:35.619 "name": "Nvme9", 00:21:35.619 "trtype": "tcp", 00:21:35.619 "traddr": "10.0.0.2", 00:21:35.619 "adrfam": "ipv4", 00:21:35.619 "trsvcid": "4420", 00:21:35.619 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:35.619 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:35.619 "hdgst": false, 00:21:35.619 "ddgst": false 00:21:35.619 }, 00:21:35.619 "method": "bdev_nvme_attach_controller" 00:21:35.619 },{ 00:21:35.619 "params": { 00:21:35.619 "name": "Nvme10", 00:21:35.619 "trtype": "tcp", 00:21:35.619 "traddr": "10.0.0.2", 00:21:35.619 "adrfam": "ipv4", 00:21:35.619 "trsvcid": "4420", 00:21:35.619 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:35.619 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:35.619 "hdgst": false, 00:21:35.619 "ddgst": false 00:21:35.619 }, 00:21:35.619 "method": "bdev_nvme_attach_controller" 00:21:35.619 }' 00:21:35.619 [2024-11-20 09:55:12.503718] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:21:35.619 [2024-11-20 09:55:12.503797] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:35.877 [2024-11-20 09:55:12.577098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.877 [2024-11-20 09:55:12.637857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.776 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.776 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:37.776 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:37.776 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.776 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:37.776 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.776 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3782707 00:21:37.776 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:37.776 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:38.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3782707 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:38.710 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3782528 00:21:38.710 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:38.710 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:21:38.710 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:38.710 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:38.710 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:38.710 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.710 { 00:21:38.710 "params": { 00:21:38.710 "name": "Nvme$subsystem", 00:21:38.710 "trtype": "$TEST_TRANSPORT", 00:21:38.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.710 "adrfam": "ipv4", 00:21:38.710 "trsvcid": "$NVMF_PORT", 00:21:38.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.710 "hdgst": ${hdgst:-false}, 00:21:38.710 "ddgst": ${ddgst:-false} 00:21:38.710 }, 00:21:38.710 "method": "bdev_nvme_attach_controller" 00:21:38.710 } 00:21:38.710 EOF 00:21:38.710 )") 00:21:38.710 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:38.710 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:38.710 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.710 { 00:21:38.710 "params": { 00:21:38.710 "name": "Nvme$subsystem", 00:21:38.710 "trtype": "$TEST_TRANSPORT", 00:21:38.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.710 "adrfam": "ipv4", 00:21:38.710 "trsvcid": "$NVMF_PORT", 00:21:38.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.710 "hdgst": ${hdgst:-false}, 00:21:38.710 "ddgst": ${ddgst:-false} 00:21:38.710 }, 00:21:38.710 "method": "bdev_nvme_attach_controller" 00:21:38.710 } 00:21:38.710 EOF 00:21:38.710 )") 00:21:38.710 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:38.710 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:38.710 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.710 { 00:21:38.710 "params": { 00:21:38.710 "name": "Nvme$subsystem", 00:21:38.710 "trtype": "$TEST_TRANSPORT", 00:21:38.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.710 "adrfam": "ipv4", 00:21:38.710 "trsvcid": "$NVMF_PORT", 00:21:38.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.710 "hdgst": ${hdgst:-false}, 00:21:38.710 "ddgst": ${ddgst:-false} 00:21:38.710 }, 00:21:38.710 "method": "bdev_nvme_attach_controller" 00:21:38.710 } 00:21:38.710 EOF 00:21:38.710 )") 00:21:38.710 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:38.710 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:38.710 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.710 { 00:21:38.711 "params": { 00:21:38.711 "name": "Nvme$subsystem", 00:21:38.711 "trtype": "$TEST_TRANSPORT", 00:21:38.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.711 "adrfam": "ipv4", 00:21:38.711 
"trsvcid": "$NVMF_PORT", 00:21:38.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.711 "hdgst": ${hdgst:-false}, 00:21:38.711 "ddgst": ${ddgst:-false} 00:21:38.711 }, 00:21:38.711 "method": "bdev_nvme_attach_controller" 00:21:38.711 } 00:21:38.711 EOF 00:21:38.711 )") 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.711 { 00:21:38.711 "params": { 00:21:38.711 "name": "Nvme$subsystem", 00:21:38.711 "trtype": "$TEST_TRANSPORT", 00:21:38.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.711 "adrfam": "ipv4", 00:21:38.711 "trsvcid": "$NVMF_PORT", 00:21:38.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.711 "hdgst": ${hdgst:-false}, 00:21:38.711 "ddgst": ${ddgst:-false} 00:21:38.711 }, 00:21:38.711 "method": "bdev_nvme_attach_controller" 00:21:38.711 } 00:21:38.711 EOF 00:21:38.711 )") 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.711 { 00:21:38.711 "params": { 00:21:38.711 "name": "Nvme$subsystem", 00:21:38.711 "trtype": "$TEST_TRANSPORT", 00:21:38.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.711 "adrfam": "ipv4", 00:21:38.711 "trsvcid": "$NVMF_PORT", 00:21:38.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.711 "hdgst": ${hdgst:-false}, 00:21:38.711 "ddgst": ${ddgst:-false} 00:21:38.711 }, 00:21:38.711 "method": "bdev_nvme_attach_controller" 00:21:38.711 } 00:21:38.711 EOF 00:21:38.711 )") 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.711 { 00:21:38.711 "params": { 00:21:38.711 "name": "Nvme$subsystem", 00:21:38.711 "trtype": "$TEST_TRANSPORT", 00:21:38.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.711 "adrfam": "ipv4", 00:21:38.711 "trsvcid": "$NVMF_PORT", 00:21:38.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.711 "hdgst": ${hdgst:-false}, 00:21:38.711 "ddgst": ${ddgst:-false} 00:21:38.711 }, 00:21:38.711 "method": "bdev_nvme_attach_controller" 00:21:38.711 } 00:21:38.711 EOF 00:21:38.711 )") 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.711 { 00:21:38.711 
"params": { 00:21:38.711 "name": "Nvme$subsystem", 00:21:38.711 "trtype": "$TEST_TRANSPORT", 00:21:38.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.711 "adrfam": "ipv4", 00:21:38.711 "trsvcid": "$NVMF_PORT", 00:21:38.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.711 "hdgst": ${hdgst:-false}, 00:21:38.711 "ddgst": ${ddgst:-false} 00:21:38.711 }, 00:21:38.711 "method": "bdev_nvme_attach_controller" 00:21:38.711 } 00:21:38.711 EOF 00:21:38.711 )") 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.711 { 00:21:38.711 "params": { 00:21:38.711 "name": "Nvme$subsystem", 00:21:38.711 "trtype": "$TEST_TRANSPORT", 00:21:38.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.711 "adrfam": "ipv4", 00:21:38.711 "trsvcid": "$NVMF_PORT", 00:21:38.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.711 "hdgst": ${hdgst:-false}, 00:21:38.711 "ddgst": ${ddgst:-false} 00:21:38.711 }, 00:21:38.711 "method": "bdev_nvme_attach_controller" 00:21:38.711 } 00:21:38.711 EOF 00:21:38.711 )") 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:38.711 { 00:21:38.711 "params": { 00:21:38.711 "name": "Nvme$subsystem", 00:21:38.711 "trtype": "$TEST_TRANSPORT", 00:21:38.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:38.711 "adrfam": "ipv4", 00:21:38.711 "trsvcid": "$NVMF_PORT", 00:21:38.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:38.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:38.711 "hdgst": ${hdgst:-false}, 00:21:38.711 "ddgst": ${ddgst:-false} 00:21:38.711 }, 00:21:38.711 "method": "bdev_nvme_attach_controller" 00:21:38.711 } 00:21:38.711 EOF 00:21:38.711 )") 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:38.711 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:38.711 "params": { 00:21:38.711 "name": "Nvme1", 00:21:38.711 "trtype": "tcp", 00:21:38.711 "traddr": "10.0.0.2", 00:21:38.711 "adrfam": "ipv4", 00:21:38.711 "trsvcid": "4420", 00:21:38.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:38.711 "hdgst": false, 00:21:38.711 "ddgst": false 00:21:38.711 }, 00:21:38.711 "method": "bdev_nvme_attach_controller" 00:21:38.711 },{ 00:21:38.711 "params": { 00:21:38.711 "name": "Nvme2", 00:21:38.711 "trtype": "tcp", 00:21:38.711 "traddr": "10.0.0.2", 00:21:38.711 "adrfam": "ipv4", 00:21:38.711 "trsvcid": "4420", 00:21:38.711 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:38.711 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:38.711 "hdgst": false, 00:21:38.711 "ddgst": false 00:21:38.711 }, 00:21:38.711 "method": "bdev_nvme_attach_controller" 00:21:38.711 },{ 00:21:38.711 "params": { 00:21:38.711 "name": "Nvme3", 00:21:38.711 "trtype": "tcp", 00:21:38.711 "traddr": "10.0.0.2", 00:21:38.711 "adrfam": "ipv4", 00:21:38.711 "trsvcid": "4420", 00:21:38.711 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:38.711 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:38.711 "hdgst": false, 00:21:38.711 "ddgst": false 00:21:38.711 }, 00:21:38.711 "method": "bdev_nvme_attach_controller" 00:21:38.711 },{ 00:21:38.711 "params": { 00:21:38.711 "name": "Nvme4", 00:21:38.711 "trtype": "tcp", 00:21:38.711 "traddr": "10.0.0.2", 00:21:38.711 "adrfam": "ipv4", 00:21:38.711 "trsvcid": "4420", 00:21:38.711 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:38.711 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:38.711 "hdgst": false, 00:21:38.711 "ddgst": false 00:21:38.711 }, 00:21:38.711 "method": "bdev_nvme_attach_controller" 00:21:38.711 },{ 00:21:38.711 "params": { 00:21:38.711 "name": "Nvme5", 00:21:38.712 "trtype": "tcp", 00:21:38.712 "traddr": "10.0.0.2", 00:21:38.712 "adrfam": "ipv4", 00:21:38.712 "trsvcid": "4420", 00:21:38.712 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:38.712 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:38.712 "hdgst": false, 00:21:38.712 "ddgst": false 00:21:38.712 }, 00:21:38.712 "method": "bdev_nvme_attach_controller" 00:21:38.712 },{ 00:21:38.712 "params": { 00:21:38.712 "name": "Nvme6", 00:21:38.712 "trtype": "tcp", 00:21:38.712 "traddr": "10.0.0.2", 00:21:38.712 "adrfam": "ipv4", 00:21:38.712 "trsvcid": "4420", 00:21:38.712 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:38.712 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:38.712 "hdgst": false, 00:21:38.712 "ddgst": false 00:21:38.712 }, 00:21:38.712 "method": "bdev_nvme_attach_controller" 00:21:38.712 },{ 00:21:38.712 "params": { 00:21:38.712 "name": "Nvme7", 00:21:38.712 "trtype": "tcp", 00:21:38.712 "traddr": "10.0.0.2", 00:21:38.712 "adrfam": "ipv4", 00:21:38.712 "trsvcid": "4420", 00:21:38.712 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:38.712 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:38.712 "hdgst": false, 00:21:38.712 "ddgst": false 00:21:38.712 }, 00:21:38.712 "method": "bdev_nvme_attach_controller" 00:21:38.712 },{ 00:21:38.712 "params": { 00:21:38.712 "name": "Nvme8", 00:21:38.712 "trtype": "tcp", 00:21:38.712 "traddr": "10.0.0.2", 00:21:38.712 "adrfam": "ipv4", 00:21:38.712 "trsvcid": "4420", 00:21:38.712 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:38.712 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:38.712 "hdgst": false, 00:21:38.712 "ddgst": false 00:21:38.712 }, 00:21:38.712 "method": "bdev_nvme_attach_controller" 00:21:38.712 },{ 00:21:38.712 "params": { 00:21:38.712 "name": "Nvme9", 00:21:38.712 "trtype": "tcp", 00:21:38.712 "traddr": "10.0.0.2", 00:21:38.712 "adrfam": "ipv4", 00:21:38.712 "trsvcid": "4420", 00:21:38.712 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:38.712 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:38.712 "hdgst": false, 00:21:38.712 "ddgst": false 00:21:38.712 }, 00:21:38.712 "method": "bdev_nvme_attach_controller" 00:21:38.712 },{ 00:21:38.712 "params": { 00:21:38.712 "name": "Nvme10", 00:21:38.712 "trtype": "tcp", 00:21:38.712 "traddr": "10.0.0.2", 00:21:38.712 "adrfam": "ipv4", 00:21:38.712 "trsvcid": "4420", 00:21:38.712 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:38.712 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:38.712 "hdgst": false, 00:21:38.712 "ddgst": false 00:21:38.712 }, 00:21:38.712 "method": "bdev_nvme_attach_controller" 00:21:38.712 }' 00:21:38.712 [2024-11-20 09:55:15.589822] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:21:38.712 [2024-11-20 09:55:15.589915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3783122 ] 00:21:38.970 [2024-11-20 09:55:15.661823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.970 [2024-11-20 09:55:15.724181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.343 Running I/O for 1 seconds... 00:21:41.716 1809.00 IOPS, 113.06 MiB/s 00:21:41.716 Latency(us) 00:21:41.716 [2024-11-20T08:55:18.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.716 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.716 Verification LBA range: start 0x0 length 0x400 00:21:41.716 Nvme1n1 : 1.10 237.95 14.87 0.00 0.00 265584.11 7039.05 231463.44 00:21:41.716 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.716 Verification LBA range: start 0x0 length 0x400 00:21:41.716 Nvme2n1 : 1.09 243.08 15.19 0.00 0.00 252787.24 11942.12 240784.12 00:21:41.716 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.716 Verification LBA range: start 0x0 length 0x400 00:21:41.716 Nvme3n1 : 1.09 239.25 14.95 0.00 0.00 253722.53 6310.87 237677.23 00:21:41.716 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.716 Verification LBA range: start 0x0 length 0x400 00:21:41.716 Nvme4n1 : 1.08 236.02 14.75 0.00 0.00 254580.81 17185.00 253211.69 00:21:41.716 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.716 Verification LBA range: start 0x0 length 0x400 00:21:41.716 Nvme5n1 : 1.13 225.73 14.11 0.00 0.00 262528.19 34758.35 237677.23 00:21:41.716 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.716 Verification LBA range: start 0x0 length 0x400 00:21:41.716 Nvme6n1 : 1.12 227.73 14.23 0.00 0.00 255525.36 22622.06 260978.92 00:21:41.716 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.716 Verification LBA range: start 0x0 length 0x400 00:21:41.716 Nvme7n1 : 1.17 272.43 17.03 0.00 0.00 210426.77 7961.41 253211.69 00:21:41.716 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.716 Verification 
LBA range: start 0x0 length 0x400 00:21:41.716 Nvme8n1 : 1.14 225.16 14.07 0.00 0.00 249938.11 18350.08 256318.58 00:21:41.716 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.716 Verification LBA range: start 0x0 length 0x400 00:21:41.716 Nvme9n1 : 1.16 220.12 13.76 0.00 0.00 252106.15 20583.16 285834.05 00:21:41.716 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.716 Verification LBA range: start 0x0 length 0x400 00:21:41.716 Nvme10n1 : 1.18 270.51 16.91 0.00 0.00 202125.43 5485.61 267192.70 00:21:41.716 [2024-11-20T08:55:18.630Z] =================================================================================================================== 00:21:41.716 [2024-11-20T08:55:18.630Z] Total : 2397.97 149.87 0.00 0.00 244127.26 5485.61 285834.05 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:41.974 rmmod nvme_tcp 00:21:41.974 rmmod nvme_fabrics 00:21:41.974 rmmod nvme_keyring 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3782528 ']' 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3782528 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3782528 ']' 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3782528 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3782528 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3782528' 00:21:41.974 killing process with pid 3782528 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3782528 00:21:41.974 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3782528 00:21:42.541 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:42.541 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:42.541 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:42.541 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:42.541 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:42.541 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:42.541 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:42.541 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:42.541 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:42.541 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.541 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.541 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.447 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:44.447 00:21:44.447 real 0m11.987s 00:21:44.447 user 0m34.938s 00:21:44.447 sys 0m3.287s 00:21:44.447 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.447 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:44.447 ************************************ 00:21:44.447 END TEST nvmf_shutdown_tc1 00:21:44.447 ************************************ 00:21:44.447 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:44.447 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:44.447 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
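The tc1 teardown traced above (stoptarget plus nvmftestfini) reduces to a short sequence: delete the generated bdevperf config and rpcs.txt, unload the NVMe/TCP kernel modules, kill and reap the nvmf_tgt pid, strip only the SPDK-tagged iptables rules, drop the test namespace, and flush the initiator address. A condensed sketch of that sequence follows; $testdir and $nvmfpid are placeholders, and the ip netns delete line is an assumption about what the xtrace-suppressed _remove_spdk_ns helper does on this rig.

# condensed teardown mirroring the stoptarget + nvmftestfini trace above (placeholders: $testdir, $nvmfpid)
rm -f ./local-job0-0-verify.state
rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"
sync
modprobe -v -r nvme-tcp                                  # pulls out nvme_tcp / nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                       # killprocess: stop the target reactor and reap it
iptables-save | grep -v SPDK_NVMF | iptables-restore     # keep everything except the SPDK-tagged rules
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true      # assumption: what the suppressed _remove_spdk_ns does
ip -4 addr flush cvl_0_1                                 # clear the initiator-side test address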
00:21:44.447 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:44.447 ************************************ 00:21:44.447 START TEST nvmf_shutdown_tc2 00:21:44.447 ************************************ 00:21:44.447 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:21:44.447 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:44.447 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:44.447 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:44.447 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.447 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:44.447 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:44.447 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:44.447 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.447 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:44.448 09:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:44.448 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:44.448 09:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:44.448 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:44.448 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:44.707 Found net devices under 0000:09:00.0: cvl_0_0 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:44.707 Found net devices under 0000:09:00.1: cvl_0_1 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:44.707 09:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:44.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:21:44.707 00:21:44.707 --- 10.0.0.2 ping statistics --- 00:21:44.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.707 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:44.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:44.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:21:44.707 00:21:44.707 --- 10.0.0.1 ping statistics --- 00:21:44.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.707 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.707 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:44.708 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:44.708 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:44.708 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:44.708 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:44.708 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:44.708 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3783884 00:21:44.708 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3783884 00:21:44.708 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:44.708 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3783884 ']' 00:21:44.708 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.708 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.708 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
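The nvmftestinit block above rebuilds the target-side network for tc2: one of the two E810 ports (cvl_0_0) is moved into a private namespace, both ends get 10.0.0.x/24 addresses, TCP/4420 is opened on the initiator interface, and reachability is checked in both directions before nvmf_tgt is started inside that namespace. Pulled out of the trace, with interface names specific to this rig, the binary path shortened, and the doubled ip netns exec prefix collapsed to one:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                       # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target namespace -> root namespace
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!                                               # waitforlisten then polls /var/tmp/spdk.sock until the RPC server is up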
00:21:44.708 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.708 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:44.708 [2024-11-20 09:55:21.575344] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:21:44.708 [2024-11-20 09:55:21.575421] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.966 [2024-11-20 09:55:21.648094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:44.966 [2024-11-20 09:55:21.705409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.966 [2024-11-20 09:55:21.705462] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.966 [2024-11-20 09:55:21.705482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.966 [2024-11-20 09:55:21.705494] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.966 [2024-11-20 09:55:21.705504] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:44.966 [2024-11-20 09:55:21.706936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.966 [2024-11-20 09:55:21.706998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:44.966 [2024-11-20 09:55:21.707067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:44.966 [2024-11-20 09:55:21.707070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.966 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.966 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:44.966 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:44.966 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:44.966 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:44.966 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.966 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:44.966 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.966 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:44.966 [2024-11-20 09:55:21.857907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.966 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.966 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:44.966 09:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:44.966 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:44.966 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:44.966 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:44.966 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.966 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:44.966 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:44.966 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:45.224 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.224 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:45.224 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.224 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:45.224 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.224 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:45.224 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.224 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:45.224 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.224 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:45.224 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.224 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:45.224 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.224 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:45.224 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.224 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:45.224 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:45.224 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.224 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:45.224 Malloc1 
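After the transport is created (nvmf_create_transport -t tcp -o -u 8192 above), each pass of the create_subsystems loop appends one block of RPCs for subsystem $i to rpcs.txt via the cat at target/shutdown.sh@29, and the single rpc_cmd at @36 replays the whole file over /var/tmp/spdk.sock, which is when the MallocN bdevs start appearing. The heredoc body itself is not echoed by xtrace, so the block below is only a representative reconstruction of one iteration using standard rpc.py method names; the exact sizes and flags shutdown.sh uses are an assumption.

i=1
cat <<EOF >> "$testdir/rpcs.txt"
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
# what the rpc_cmd at @36 roughly amounts to: scripts/rpc.py -s /var/tmp/spdk.sock < "$testdir/rpcs.txt"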
00:21:45.224 [2024-11-20 09:55:21.972003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.224 Malloc2 00:21:45.224 Malloc3 00:21:45.224 Malloc4 00:21:45.482 Malloc5 00:21:45.482 Malloc6 00:21:45.482 Malloc7 00:21:45.482 Malloc8 00:21:45.482 Malloc9 00:21:45.741 Malloc10 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3784064 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3784064 /var/tmp/bdevperf.sock 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3784064 ']' 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:45.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
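On the initiator side, bdevperf receives its bdev configuration through process substitution (the --json /dev/fd/63 in the trace), so nothing is written to disk. Stripped of the xtrace noise, the launch above is equivalent to the following; the binary path is shortened and the flag glosses come from bdevperf's usage text:

# -q 64: queue depth, -o 65536: 64 KiB I/O size, -w verify: data-verification workload, -t 10: run for 10 seconds
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10
# -r names the RPC socket that the waitforio loop later polls with bdev_get_iostat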
00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.741 { 00:21:45.741 "params": { 00:21:45.741 "name": "Nvme$subsystem", 00:21:45.741 "trtype": "$TEST_TRANSPORT", 00:21:45.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.741 "adrfam": "ipv4", 00:21:45.741 "trsvcid": "$NVMF_PORT", 00:21:45.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.741 "hdgst": ${hdgst:-false}, 00:21:45.741 "ddgst": ${ddgst:-false} 00:21:45.741 }, 00:21:45.741 "method": "bdev_nvme_attach_controller" 00:21:45.741 } 00:21:45.741 EOF 00:21:45.741 )") 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.741 { 00:21:45.741 "params": { 00:21:45.741 "name": "Nvme$subsystem", 00:21:45.741 "trtype": "$TEST_TRANSPORT", 00:21:45.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.741 "adrfam": "ipv4", 00:21:45.741 "trsvcid": "$NVMF_PORT", 00:21:45.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.741 "hdgst": ${hdgst:-false}, 00:21:45.741 "ddgst": ${ddgst:-false} 00:21:45.741 }, 00:21:45.741 "method": "bdev_nvme_attach_controller" 00:21:45.741 } 00:21:45.741 EOF 00:21:45.741 )") 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.741 { 00:21:45.741 "params": { 00:21:45.741 "name": "Nvme$subsystem", 00:21:45.741 "trtype": "$TEST_TRANSPORT", 00:21:45.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.741 "adrfam": "ipv4", 00:21:45.741 "trsvcid": "$NVMF_PORT", 00:21:45.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.741 "hdgst": ${hdgst:-false}, 00:21:45.741 "ddgst": ${ddgst:-false} 00:21:45.741 }, 00:21:45.741 "method": "bdev_nvme_attach_controller" 00:21:45.741 } 00:21:45.741 EOF 00:21:45.741 )") 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.741 { 00:21:45.741 "params": { 00:21:45.741 "name": "Nvme$subsystem", 00:21:45.741 
"trtype": "$TEST_TRANSPORT", 00:21:45.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.741 "adrfam": "ipv4", 00:21:45.741 "trsvcid": "$NVMF_PORT", 00:21:45.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.741 "hdgst": ${hdgst:-false}, 00:21:45.741 "ddgst": ${ddgst:-false} 00:21:45.741 }, 00:21:45.741 "method": "bdev_nvme_attach_controller" 00:21:45.741 } 00:21:45.741 EOF 00:21:45.741 )") 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.741 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.741 { 00:21:45.741 "params": { 00:21:45.741 "name": "Nvme$subsystem", 00:21:45.741 "trtype": "$TEST_TRANSPORT", 00:21:45.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.742 "adrfam": "ipv4", 00:21:45.742 "trsvcid": "$NVMF_PORT", 00:21:45.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.742 "hdgst": ${hdgst:-false}, 00:21:45.742 "ddgst": ${ddgst:-false} 00:21:45.742 }, 00:21:45.742 "method": "bdev_nvme_attach_controller" 00:21:45.742 } 00:21:45.742 EOF 00:21:45.742 )") 00:21:45.742 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:45.742 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.742 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.742 { 00:21:45.742 "params": { 00:21:45.742 "name": "Nvme$subsystem", 00:21:45.742 "trtype": "$TEST_TRANSPORT", 00:21:45.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.742 "adrfam": "ipv4", 00:21:45.742 "trsvcid": "$NVMF_PORT", 00:21:45.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.742 "hdgst": ${hdgst:-false}, 00:21:45.742 "ddgst": ${ddgst:-false} 00:21:45.742 }, 00:21:45.742 "method": "bdev_nvme_attach_controller" 00:21:45.742 } 00:21:45.742 EOF 00:21:45.742 )") 00:21:45.742 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:45.742 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.742 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.742 { 00:21:45.742 "params": { 00:21:45.742 "name": "Nvme$subsystem", 00:21:45.742 "trtype": "$TEST_TRANSPORT", 00:21:45.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.742 "adrfam": "ipv4", 00:21:45.742 "trsvcid": "$NVMF_PORT", 00:21:45.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.742 "hdgst": ${hdgst:-false}, 00:21:45.742 "ddgst": ${ddgst:-false} 00:21:45.742 }, 00:21:45.742 "method": "bdev_nvme_attach_controller" 00:21:45.742 } 00:21:45.742 EOF 00:21:45.742 )") 00:21:45.742 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:45.742 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.742 09:55:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.742 { 00:21:45.742 "params": { 00:21:45.742 "name": "Nvme$subsystem", 00:21:45.742 "trtype": "$TEST_TRANSPORT", 00:21:45.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.742 "adrfam": "ipv4", 00:21:45.742 "trsvcid": "$NVMF_PORT", 00:21:45.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.742 "hdgst": ${hdgst:-false}, 00:21:45.742 "ddgst": ${ddgst:-false} 00:21:45.742 }, 00:21:45.742 "method": "bdev_nvme_attach_controller" 00:21:45.742 } 00:21:45.742 EOF 00:21:45.742 )") 00:21:45.742 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:45.742 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.742 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.742 { 00:21:45.742 "params": { 00:21:45.742 "name": "Nvme$subsystem", 00:21:45.742 "trtype": "$TEST_TRANSPORT", 00:21:45.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.742 "adrfam": "ipv4", 00:21:45.742 "trsvcid": "$NVMF_PORT", 00:21:45.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.742 "hdgst": ${hdgst:-false}, 00:21:45.742 "ddgst": ${ddgst:-false} 00:21:45.742 }, 00:21:45.742 "method": "bdev_nvme_attach_controller" 00:21:45.742 } 00:21:45.742 EOF 00:21:45.742 )") 00:21:45.742 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:45.742 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:45.742 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:45.742 { 00:21:45.742 "params": { 00:21:45.742 "name": "Nvme$subsystem", 00:21:45.742 "trtype": "$TEST_TRANSPORT", 00:21:45.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:45.742 "adrfam": "ipv4", 00:21:45.742 "trsvcid": "$NVMF_PORT", 00:21:45.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:45.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:45.742 "hdgst": ${hdgst:-false}, 00:21:45.742 "ddgst": ${ddgst:-false} 00:21:45.742 }, 00:21:45.742 "method": "bdev_nvme_attach_controller" 00:21:45.742 } 00:21:45.742 EOF 00:21:45.742 )") 00:21:45.742 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:45.742 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
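gen_nvmf_target_json expands that heredoc once per subsystem number, comma-joins the fragments (the IFS=, and printf just below), and wraps them in the usual SPDK JSON-config envelope before piping the result through jq. The envelope itself is not visible in the trace; assuming the standard subsystems/config layout, the document bdevperf receives looks roughly like this, with one attach entry per controller as expanded a few lines further down:

# assumed envelope around the per-controller entries (only the entries themselves appear in the trace)
jq . <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF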
00:21:45.742 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:45.742 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:45.742 "params": { 00:21:45.742 "name": "Nvme1", 00:21:45.742 "trtype": "tcp", 00:21:45.742 "traddr": "10.0.0.2", 00:21:45.742 "adrfam": "ipv4", 00:21:45.742 "trsvcid": "4420", 00:21:45.742 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.742 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:45.742 "hdgst": false, 00:21:45.742 "ddgst": false 00:21:45.742 }, 00:21:45.742 "method": "bdev_nvme_attach_controller" 00:21:45.742 },{ 00:21:45.742 "params": { 00:21:45.742 "name": "Nvme2", 00:21:45.742 "trtype": "tcp", 00:21:45.742 "traddr": "10.0.0.2", 00:21:45.742 "adrfam": "ipv4", 00:21:45.742 "trsvcid": "4420", 00:21:45.742 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:45.742 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:45.742 "hdgst": false, 00:21:45.742 "ddgst": false 00:21:45.742 }, 00:21:45.742 "method": "bdev_nvme_attach_controller" 00:21:45.742 },{ 00:21:45.742 "params": { 00:21:45.742 "name": "Nvme3", 00:21:45.742 "trtype": "tcp", 00:21:45.742 "traddr": "10.0.0.2", 00:21:45.742 "adrfam": "ipv4", 00:21:45.742 "trsvcid": "4420", 00:21:45.742 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:45.742 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:45.742 "hdgst": false, 00:21:45.742 "ddgst": false 00:21:45.742 }, 00:21:45.742 "method": "bdev_nvme_attach_controller" 00:21:45.742 },{ 00:21:45.742 "params": { 00:21:45.742 "name": "Nvme4", 00:21:45.742 "trtype": "tcp", 00:21:45.742 "traddr": "10.0.0.2", 00:21:45.742 "adrfam": "ipv4", 00:21:45.742 "trsvcid": "4420", 00:21:45.742 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:45.742 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:45.742 "hdgst": false, 00:21:45.742 "ddgst": false 00:21:45.742 }, 00:21:45.742 "method": "bdev_nvme_attach_controller" 00:21:45.742 },{ 00:21:45.742 "params": { 00:21:45.742 "name": "Nvme5", 00:21:45.742 "trtype": "tcp", 00:21:45.742 "traddr": "10.0.0.2", 00:21:45.742 "adrfam": "ipv4", 00:21:45.742 "trsvcid": "4420", 00:21:45.742 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:45.742 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:45.742 "hdgst": false, 00:21:45.742 "ddgst": false 00:21:45.742 }, 00:21:45.742 "method": "bdev_nvme_attach_controller" 00:21:45.742 },{ 00:21:45.742 "params": { 00:21:45.742 "name": "Nvme6", 00:21:45.742 "trtype": "tcp", 00:21:45.742 "traddr": "10.0.0.2", 00:21:45.742 "adrfam": "ipv4", 00:21:45.742 "trsvcid": "4420", 00:21:45.742 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:45.742 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:45.742 "hdgst": false, 00:21:45.742 "ddgst": false 00:21:45.742 }, 00:21:45.742 "method": "bdev_nvme_attach_controller" 00:21:45.742 },{ 00:21:45.742 "params": { 00:21:45.742 "name": "Nvme7", 00:21:45.742 "trtype": "tcp", 00:21:45.742 "traddr": "10.0.0.2", 00:21:45.742 "adrfam": "ipv4", 00:21:45.742 "trsvcid": "4420", 00:21:45.742 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:45.742 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:45.742 "hdgst": false, 00:21:45.742 "ddgst": false 00:21:45.742 }, 00:21:45.742 "method": "bdev_nvme_attach_controller" 00:21:45.742 },{ 00:21:45.742 "params": { 00:21:45.742 "name": "Nvme8", 00:21:45.742 "trtype": "tcp", 00:21:45.742 "traddr": "10.0.0.2", 00:21:45.742 "adrfam": "ipv4", 00:21:45.742 "trsvcid": "4420", 00:21:45.742 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:45.742 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:45.742 "hdgst": false, 00:21:45.742 "ddgst": false 00:21:45.742 }, 00:21:45.742 "method": "bdev_nvme_attach_controller" 00:21:45.742 },{ 00:21:45.742 "params": { 00:21:45.742 "name": "Nvme9", 00:21:45.742 "trtype": "tcp", 00:21:45.742 "traddr": "10.0.0.2", 00:21:45.742 "adrfam": "ipv4", 00:21:45.742 "trsvcid": "4420", 00:21:45.742 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:45.742 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:45.742 "hdgst": false, 00:21:45.742 "ddgst": false 00:21:45.742 }, 00:21:45.743 "method": "bdev_nvme_attach_controller" 00:21:45.743 },{ 00:21:45.743 "params": { 00:21:45.743 "name": "Nvme10", 00:21:45.743 "trtype": "tcp", 00:21:45.743 "traddr": "10.0.0.2", 00:21:45.743 "adrfam": "ipv4", 00:21:45.743 "trsvcid": "4420", 00:21:45.743 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:45.743 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:45.743 "hdgst": false, 00:21:45.743 "ddgst": false 00:21:45.743 }, 00:21:45.743 "method": "bdev_nvme_attach_controller" 00:21:45.743 }' 00:21:45.743 [2024-11-20 09:55:22.506867] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:21:45.743 [2024-11-20 09:55:22.506962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3784064 ] 00:21:45.743 [2024-11-20 09:55:22.578580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.743 [2024-11-20 09:55:22.639358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.641 Running I/O for 10 seconds... 00:21:47.641 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.641 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:47.641 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:47.641 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.641 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:47.641 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.641 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:47.641 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:47.641 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:47.642 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:47.642 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:47.642 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:47.642 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:47.642 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:47.642 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:47.642 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.642 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:47.642 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.900 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:47.900 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:47.900 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:48.158 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:48.158 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:48.158 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:48.158 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:48.158 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.158 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:48.158 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.158 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:48.158 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:48.158 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:48.417 09:55:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3784064 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3784064 ']' 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3784064 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3784064 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3784064' 00:21:48.417 killing process with pid 3784064 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3784064 00:21:48.417 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3784064 00:21:48.417 2112.00 IOPS, 132.00 MiB/s [2024-11-20T08:55:25.331Z] Received shutdown signal, test time was about 1.034040 seconds 00:21:48.417 00:21:48.417 Latency(us) 00:21:48.417 [2024-11-20T08:55:25.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.417 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:48.417 Verification LBA range: start 0x0 length 0x400 00:21:48.417 Nvme1n1 : 0.97 198.21 12.39 0.00 0.00 319330.23 21845.33 260978.92 00:21:48.417 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:48.417 Verification LBA range: start 0x0 length 0x400 00:21:48.417 Nvme2n1 : 1.02 256.29 16.02 0.00 0.00 238563.87 19612.25 243891.01 00:21:48.417 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:48.417 Verification LBA range: start 0x0 length 0x400 00:21:48.417 Nvme3n1 : 1.03 248.00 15.50 0.00 0.00 245851.21 21942.42 250104.79 00:21:48.417 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:48.417 Verification LBA range: start 0x0 length 0x400 00:21:48.417 Nvme4n1 : 1.00 260.84 16.30 0.00 0.00 227641.07 7184.69 257872.02 00:21:48.417 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:48.417 Verification LBA range: start 0x0 length 0x400 00:21:48.417 Nvme5n1 : 0.99 194.91 12.18 0.00 0.00 300406.71 22136.60 276513.37 00:21:48.417 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:48.417 Verification LBA range: start 0x0 length 0x400 00:21:48.417 Nvme6n1 : 1.03 247.78 15.49 0.00 0.00 232556.66 20874.43 260978.92 
00:21:48.417 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:48.417 Verification LBA range: start 0x0 length 0x400 00:21:48.417 Nvme7n1 : 1.03 249.70 15.61 0.00 0.00 226418.73 15728.64 262532.36 00:21:48.417 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:48.417 Verification LBA range: start 0x0 length 0x400 00:21:48.417 Nvme8n1 : 0.98 196.51 12.28 0.00 0.00 279919.38 20388.98 253211.69 00:21:48.417 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:48.417 Verification LBA range: start 0x0 length 0x400 00:21:48.417 Nvme9n1 : 1.02 256.01 16.00 0.00 0.00 207129.47 20194.80 248551.35 00:21:48.417 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:48.417 Verification LBA range: start 0x0 length 0x400 00:21:48.417 Nvme10n1 : 0.99 193.67 12.10 0.00 0.00 273217.86 21262.79 281173.71 00:21:48.417 [2024-11-20T08:55:25.331Z] =================================================================================================================== 00:21:48.417 [2024-11-20T08:55:25.331Z] Total : 2301.91 143.87 0.00 0.00 250707.55 7184.69 281173.71 00:21:48.675 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3783884 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:50.049 rmmod nvme_tcp 00:21:50.049 rmmod nvme_fabrics 00:21:50.049 rmmod nvme_keyring 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3783884 ']' 00:21:50.049 09:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3783884 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3783884 ']' 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3783884 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3783884 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3783884' 00:21:50.049 killing process with pid 3783884 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3783884 00:21:50.049 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3783884 00:21:50.307 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:50.307 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:50.307 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:50.307 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:50.307 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:50.307 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:50.307 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:50.307 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:50.307 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:50.307 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.307 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.307 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.854 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:52.854 00:21:52.854 real 0m7.900s 00:21:52.854 user 0m24.418s 00:21:52.854 sys 0m1.574s 00:21:52.854 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:52.854 09:55:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:52.854 ************************************ 00:21:52.854 END TEST nvmf_shutdown_tc2 00:21:52.854 ************************************ 00:21:52.854 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:52.854 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:52.854 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:52.854 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:52.854 ************************************ 00:21:52.854 START TEST nvmf_shutdown_tc3 00:21:52.854 ************************************ 00:21:52.854 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:52.854 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:52.854 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:52.854 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:52.854 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:52.855 09:55:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:52.855 09:55:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:52.855 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:52.855 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
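The nvmf/common.sh fragment above filters the host's PCI inventory down to the Intel E810 ports (vendor 0x8086, device 0x159b), checks that each one is bound to the ice driver rather than unknown/unbound, and then resolves each PCI address to its network interface through sysfs. A minimal standalone sketch of that discovery pattern, assuming a standard Linux sysfs layout; this is an illustration, not the harness's actual helper:

#!/usr/bin/env bash
# Sketch: enumerate Intel E810 ports (0x8086:0x159b) and map each PCI
# address to its net device via sysfs, mirroring the checks traced above.
set -euo pipefail

net_devs=()
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")          # e.g. 0x8086
    device=$(<"$pci/device")          # e.g. 0x159b
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue

    # Skip ports that are not bound to the expected kernel driver.
    driver=$(basename "$(readlink "$pci/driver" 2>/dev/null || echo unbound)")
    [[ $driver == ice ]] || continue

    # A bound, usable port exposes its interface name(s) under net/.
    for dev in "$pci"/net/*; do
        [[ -e $dev ]] || continue
        echo "Found net device under ${pci##*/}: ${dev##*/}"
        net_devs+=("${dev##*/}")
    done
done

(( ${#net_devs[@]} >= 2 )) || { echo 'need two test ports' >&2; exit 1; }

The trace then splits the discovered pair into target and initiator roles (cvl_0_0 and cvl_0_1 in this run), as the entries that follow show.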
00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:52.855 Found net devices under 0000:09:00.0: cvl_0_0 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:52.855 Found net devices under 0000:09:00.1: cvl_0_1 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:52.855 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:52.856 09:55:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:52.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:21:52.856 00:21:52.856 --- 10.0.0.2 ping statistics --- 00:21:52.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.856 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:52.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:21:52.856 00:21:52.856 --- 10.0.0.1 ping statistics --- 00:21:52.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.856 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3784982 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3784982 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3784982 ']' 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
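The init steps traced above (nvmf/common.sh@250-@291) isolate the target-side port in its own network namespace, address both ends of the back-to-back link, open TCP/4420 with an iptables rule tagged by an SPDK_NVMF comment (which is exactly what the iptables-save | grep -v SPDK_NVMF | iptables-restore cleanup in tc2's nvmftestfini strips out again), and verify reachability in both directions. Condensed into a standalone sketch using the interface names and addresses from this run:

#!/usr/bin/env bash
# Sketch of the target/initiator split traced above:
#   cvl_0_0 -> target side, moved into its own namespace
#   cvl_0_1 -> initiator side, stays in the default namespace
set -euo pipefail

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add "$INI_IP/24" dev "$INI_IF"
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Tag the rule so teardown can remove it by its comment:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 "$TGT_IP"                        # initiator -> target namespace
ip netns exec "$NS" ping -c 1 "$INI_IP"    # target namespace -> initiator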
00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.856 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:52.856 [2024-11-20 09:55:29.547683] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:21:52.856 [2024-11-20 09:55:29.547752] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.856 [2024-11-20 09:55:29.618489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:52.856 [2024-11-20 09:55:29.678352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.856 [2024-11-20 09:55:29.678416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.856 [2024-11-20 09:55:29.678447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.856 [2024-11-20 09:55:29.678460] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.856 [2024-11-20 09:55:29.678470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:52.856 [2024-11-20 09:55:29.680143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.856 [2024-11-20 09:55:29.680210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:52.856 [2024-11-20 09:55:29.680278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:52.856 [2024-11-20 09:55:29.680281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:53.115 [2024-11-20 09:55:29.832967] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:53.115 09:55:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.115 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:53.116 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.116 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:53.116 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.116 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:53.116 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.116 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:53.116 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.116 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:53.116 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.116 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:53.116 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:53.116 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:53.116 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:53.116 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.116 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:53.116 Malloc1 
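Each cat at shutdown.sh@29 above appends one subsystem's worth of RPC commands to rpcs.txt, and the rpc_cmd at shutdown.sh@36 then presumably replays the whole batch against the target; the visible result below is the Malloc1..Malloc10 bdevs and the NVMe/TCP listener on 10.0.0.2:4420. The heredoc body itself is not echoed by xtrace in this log, so the following is only a plausible reconstruction built from standard SPDK rpc.py command names; the real rpcs.txt written by target/shutdown.sh may differ in details such as malloc sizes, serial numbers, or extra options:

#!/usr/bin/env bash
# Hypothetical reconstruction only: the actual heredoc in target/shutdown.sh
# is not visible in this log. RPC names below are standard SPDK rpc.py calls.
RPC=./scripts/rpc.py            # assumed path; the harness uses its rpc_cmd wrapper
TGT_IP=10.0.0.2 PORT=4420

$RPC nvmf_create_transport -t tcp -o -u 8192    # traced at shutdown.sh@21

for i in $(seq 1 10); do
cat <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a $TGT_IP -s $PORT
EOF
done > rpcs.txt

# Feeding the batch to rpc.py on stdin is assumed here; the harness's
# rpc_cmd wrapper may deliver it differently.
$RPC < rpcs.txt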
00:21:53.116 [2024-11-20 09:55:29.944498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.116 Malloc2 00:21:53.116 Malloc3 00:21:53.373 Malloc4 00:21:53.373 Malloc5 00:21:53.373 Malloc6 00:21:53.373 Malloc7 00:21:53.373 Malloc8 00:21:53.633 Malloc9 00:21:53.633 Malloc10 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3785160 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3785160 /var/tmp/bdevperf.sock 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3785160 ']' 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:53.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:53.633 { 00:21:53.633 "params": { 00:21:53.633 "name": "Nvme$subsystem", 00:21:53.633 "trtype": "$TEST_TRANSPORT", 00:21:53.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.633 "adrfam": "ipv4", 00:21:53.633 "trsvcid": "$NVMF_PORT", 00:21:53.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.633 "hdgst": ${hdgst:-false}, 00:21:53.633 "ddgst": ${ddgst:-false} 00:21:53.633 }, 00:21:53.633 "method": "bdev_nvme_attach_controller" 00:21:53.633 } 00:21:53.633 EOF 00:21:53.633 )") 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:53.633 { 00:21:53.633 "params": { 00:21:53.633 "name": "Nvme$subsystem", 00:21:53.633 "trtype": "$TEST_TRANSPORT", 00:21:53.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.633 "adrfam": "ipv4", 00:21:53.633 "trsvcid": "$NVMF_PORT", 00:21:53.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.633 "hdgst": ${hdgst:-false}, 00:21:53.633 "ddgst": ${ddgst:-false} 00:21:53.633 }, 00:21:53.633 "method": "bdev_nvme_attach_controller" 00:21:53.633 } 00:21:53.633 EOF 00:21:53.633 )") 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:53.633 { 00:21:53.633 "params": { 00:21:53.633 "name": "Nvme$subsystem", 00:21:53.633 "trtype": "$TEST_TRANSPORT", 00:21:53.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.633 "adrfam": "ipv4", 00:21:53.633 "trsvcid": "$NVMF_PORT", 00:21:53.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.633 "hdgst": ${hdgst:-false}, 00:21:53.633 "ddgst": ${ddgst:-false} 00:21:53.633 }, 00:21:53.633 "method": "bdev_nvme_attach_controller" 00:21:53.633 } 00:21:53.633 EOF 00:21:53.633 )") 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:53.633 { 00:21:53.633 "params": { 00:21:53.633 "name": "Nvme$subsystem", 00:21:53.633 
"trtype": "$TEST_TRANSPORT", 00:21:53.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.633 "adrfam": "ipv4", 00:21:53.633 "trsvcid": "$NVMF_PORT", 00:21:53.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.633 "hdgst": ${hdgst:-false}, 00:21:53.633 "ddgst": ${ddgst:-false} 00:21:53.633 }, 00:21:53.633 "method": "bdev_nvme_attach_controller" 00:21:53.633 } 00:21:53.633 EOF 00:21:53.633 )") 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:53.633 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:53.633 { 00:21:53.633 "params": { 00:21:53.633 "name": "Nvme$subsystem", 00:21:53.633 "trtype": "$TEST_TRANSPORT", 00:21:53.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.633 "adrfam": "ipv4", 00:21:53.633 "trsvcid": "$NVMF_PORT", 00:21:53.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.633 "hdgst": ${hdgst:-false}, 00:21:53.633 "ddgst": ${ddgst:-false} 00:21:53.633 }, 00:21:53.633 "method": "bdev_nvme_attach_controller" 00:21:53.633 } 00:21:53.633 EOF 00:21:53.633 )") 00:21:53.634 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:53.634 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:53.634 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:53.634 { 00:21:53.634 "params": { 00:21:53.634 "name": "Nvme$subsystem", 00:21:53.634 "trtype": "$TEST_TRANSPORT", 00:21:53.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.634 "adrfam": "ipv4", 00:21:53.634 "trsvcid": "$NVMF_PORT", 00:21:53.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.634 "hdgst": ${hdgst:-false}, 00:21:53.634 "ddgst": ${ddgst:-false} 00:21:53.634 }, 00:21:53.634 "method": "bdev_nvme_attach_controller" 00:21:53.634 } 00:21:53.634 EOF 00:21:53.634 )") 00:21:53.634 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:53.634 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:53.634 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:53.634 { 00:21:53.634 "params": { 00:21:53.634 "name": "Nvme$subsystem", 00:21:53.634 "trtype": "$TEST_TRANSPORT", 00:21:53.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.634 "adrfam": "ipv4", 00:21:53.634 "trsvcid": "$NVMF_PORT", 00:21:53.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.634 "hdgst": ${hdgst:-false}, 00:21:53.634 "ddgst": ${ddgst:-false} 00:21:53.634 }, 00:21:53.634 "method": "bdev_nvme_attach_controller" 00:21:53.634 } 00:21:53.634 EOF 00:21:53.634 )") 00:21:53.634 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:53.634 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:53.634 09:55:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:53.634 { 00:21:53.634 "params": { 00:21:53.634 "name": "Nvme$subsystem", 00:21:53.634 "trtype": "$TEST_TRANSPORT", 00:21:53.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.634 "adrfam": "ipv4", 00:21:53.634 "trsvcid": "$NVMF_PORT", 00:21:53.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.634 "hdgst": ${hdgst:-false}, 00:21:53.634 "ddgst": ${ddgst:-false} 00:21:53.634 }, 00:21:53.634 "method": "bdev_nvme_attach_controller" 00:21:53.634 } 00:21:53.634 EOF 00:21:53.634 )") 00:21:53.634 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:53.634 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:53.634 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:53.634 { 00:21:53.634 "params": { 00:21:53.634 "name": "Nvme$subsystem", 00:21:53.634 "trtype": "$TEST_TRANSPORT", 00:21:53.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.634 "adrfam": "ipv4", 00:21:53.634 "trsvcid": "$NVMF_PORT", 00:21:53.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.634 "hdgst": ${hdgst:-false}, 00:21:53.634 "ddgst": ${ddgst:-false} 00:21:53.634 }, 00:21:53.634 "method": "bdev_nvme_attach_controller" 00:21:53.634 } 00:21:53.634 EOF 00:21:53.634 )") 00:21:53.634 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:53.634 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:53.634 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:53.634 { 00:21:53.634 "params": { 00:21:53.634 "name": "Nvme$subsystem", 00:21:53.634 "trtype": "$TEST_TRANSPORT", 00:21:53.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:53.634 "adrfam": "ipv4", 00:21:53.634 "trsvcid": "$NVMF_PORT", 00:21:53.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:53.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:53.634 "hdgst": ${hdgst:-false}, 00:21:53.634 "ddgst": ${ddgst:-false} 00:21:53.634 }, 00:21:53.634 "method": "bdev_nvme_attach_controller" 00:21:53.634 } 00:21:53.634 EOF 00:21:53.634 )") 00:21:53.634 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:53.634 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
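The per-subsystem fragments assembled above are merged by the jq call just traced and handed to bdevperf as its --json config (printed in full below). Once bdevperf is attached over /var/tmp/bdevperf.sock, the waitforio helper traced further down (shutdown.sh@58-@70, and already seen in tc2 above) polls Nvme1n1's read counter until at least 100 reads have completed, proving the target is under active I/O before it gets killed. Reassembled from the traced commands into one self-contained function; the rpc.py path is an assumption and the helper in target/shutdown.sh may be structured slightly differently:

#!/usr/bin/env bash
# waitforio, rebuilt from the commands traced in shutdown.sh@58-@70: poll the
# bdev's read counter over the bdevperf RPC socket, up to 10 times, 0.25s apart.
waitforio() {
    local sock=$1 bdev=$2
    local ret=1 i read_io_count

    for ((i = 10; i != 0; i--)); do
        read_io_count=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return "$ret"
}

# Usage as in this test: confirm I/O is flowing against Nvme1n1, then kill the
# nvmf target process mid-workload to exercise the shutdown path.
waitforio /var/tmp/bdevperf.sock Nvme1n1 || exit 1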
00:21:53.634 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:53.634 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:53.634 "params": { 00:21:53.634 "name": "Nvme1", 00:21:53.634 "trtype": "tcp", 00:21:53.634 "traddr": "10.0.0.2", 00:21:53.634 "adrfam": "ipv4", 00:21:53.634 "trsvcid": "4420", 00:21:53.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:53.634 "hdgst": false, 00:21:53.634 "ddgst": false 00:21:53.634 }, 00:21:53.634 "method": "bdev_nvme_attach_controller" 00:21:53.634 },{ 00:21:53.634 "params": { 00:21:53.634 "name": "Nvme2", 00:21:53.634 "trtype": "tcp", 00:21:53.634 "traddr": "10.0.0.2", 00:21:53.634 "adrfam": "ipv4", 00:21:53.634 "trsvcid": "4420", 00:21:53.634 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:53.634 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:53.634 "hdgst": false, 00:21:53.634 "ddgst": false 00:21:53.634 }, 00:21:53.634 "method": "bdev_nvme_attach_controller" 00:21:53.634 },{ 00:21:53.634 "params": { 00:21:53.634 "name": "Nvme3", 00:21:53.634 "trtype": "tcp", 00:21:53.634 "traddr": "10.0.0.2", 00:21:53.634 "adrfam": "ipv4", 00:21:53.634 "trsvcid": "4420", 00:21:53.634 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:53.634 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:53.634 "hdgst": false, 00:21:53.634 "ddgst": false 00:21:53.634 }, 00:21:53.634 "method": "bdev_nvme_attach_controller" 00:21:53.634 },{ 00:21:53.634 "params": { 00:21:53.634 "name": "Nvme4", 00:21:53.634 "trtype": "tcp", 00:21:53.634 "traddr": "10.0.0.2", 00:21:53.634 "adrfam": "ipv4", 00:21:53.634 "trsvcid": "4420", 00:21:53.634 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:53.634 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:53.634 "hdgst": false, 00:21:53.634 "ddgst": false 00:21:53.634 }, 00:21:53.634 "method": "bdev_nvme_attach_controller" 00:21:53.634 },{ 00:21:53.634 "params": { 00:21:53.634 "name": "Nvme5", 00:21:53.634 "trtype": "tcp", 00:21:53.634 "traddr": "10.0.0.2", 00:21:53.634 "adrfam": "ipv4", 00:21:53.634 "trsvcid": "4420", 00:21:53.634 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:53.634 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:53.634 "hdgst": false, 00:21:53.634 "ddgst": false 00:21:53.634 }, 00:21:53.634 "method": "bdev_nvme_attach_controller" 00:21:53.634 },{ 00:21:53.634 "params": { 00:21:53.634 "name": "Nvme6", 00:21:53.634 "trtype": "tcp", 00:21:53.634 "traddr": "10.0.0.2", 00:21:53.634 "adrfam": "ipv4", 00:21:53.634 "trsvcid": "4420", 00:21:53.634 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:53.634 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:53.634 "hdgst": false, 00:21:53.634 "ddgst": false 00:21:53.634 }, 00:21:53.634 "method": "bdev_nvme_attach_controller" 00:21:53.634 },{ 00:21:53.634 "params": { 00:21:53.634 "name": "Nvme7", 00:21:53.634 "trtype": "tcp", 00:21:53.634 "traddr": "10.0.0.2", 00:21:53.634 "adrfam": "ipv4", 00:21:53.634 "trsvcid": "4420", 00:21:53.634 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:53.634 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:53.634 "hdgst": false, 00:21:53.634 "ddgst": false 00:21:53.634 }, 00:21:53.634 "method": "bdev_nvme_attach_controller" 00:21:53.634 },{ 00:21:53.634 "params": { 00:21:53.635 "name": "Nvme8", 00:21:53.635 "trtype": "tcp", 00:21:53.635 "traddr": "10.0.0.2", 00:21:53.635 "adrfam": "ipv4", 00:21:53.635 "trsvcid": "4420", 00:21:53.635 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:53.635 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:53.635 "hdgst": false, 00:21:53.635 "ddgst": false 00:21:53.635 }, 00:21:53.635 "method": "bdev_nvme_attach_controller" 00:21:53.635 },{ 00:21:53.635 "params": { 00:21:53.635 "name": "Nvme9", 00:21:53.635 "trtype": "tcp", 00:21:53.635 "traddr": "10.0.0.2", 00:21:53.635 "adrfam": "ipv4", 00:21:53.635 "trsvcid": "4420", 00:21:53.635 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:53.635 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:53.635 "hdgst": false, 00:21:53.635 "ddgst": false 00:21:53.635 }, 00:21:53.635 "method": "bdev_nvme_attach_controller" 00:21:53.635 },{ 00:21:53.635 "params": { 00:21:53.635 "name": "Nvme10", 00:21:53.635 "trtype": "tcp", 00:21:53.635 "traddr": "10.0.0.2", 00:21:53.635 "adrfam": "ipv4", 00:21:53.635 "trsvcid": "4420", 00:21:53.635 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:53.635 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:53.635 "hdgst": false, 00:21:53.635 "ddgst": false 00:21:53.635 }, 00:21:53.635 "method": "bdev_nvme_attach_controller" 00:21:53.635 }' 00:21:53.635 [2024-11-20 09:55:30.477777] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:21:53.635 [2024-11-20 09:55:30.477872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3785160 ] 00:21:53.894 [2024-11-20 09:55:30.549802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.894 [2024-11-20 09:55:30.610647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.792 Running I/O for 10 seconds... 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:55.792 09:55:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:55.792 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:56.051 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:56.051 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:56.051 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:56.051 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:56.051 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.051 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:56.051 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.051 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:56.051 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:56.051 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:56.309 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:56.309 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:56.309 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:56.309 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:56.309 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.309 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:56.309 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.309 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # read_io_count=131 00:21:56.309 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:56.309 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:56.309 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:56.309 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:56.309 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3784982 00:21:56.309 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3784982 ']' 00:21:56.309 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3784982 00:21:56.309 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:21:56.309 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.309 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3784982 00:21:56.588 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:56.588 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:56.588 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3784982' 00:21:56.588 killing process with pid 3784982 00:21:56.588 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3784982 00:21:56.588 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3784982 00:21:56.588 [2024-11-20 09:55:33.247184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0f00 is same with the state(6) to be set 00:21:56.588 [2024-11-20 09:55:33.247257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0f00 is same with the state(6) to be set 00:21:56.588 [2024-11-20 09:55:33.247285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0f00 is same with the state(6) to be set 00:21:56.588 [2024-11-20 09:55:33.247298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0f00 is same with the state(6) to be set 00:21:56.588 [2024-11-20 09:55:33.247321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0f00 is same with the state(6) to be set 00:21:56.588 [2024-11-20 09:55:33.247342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0f00 is same with the state(6) to be set 00:21:56.588 [2024-11-20 09:55:33.247355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0f00 is same with the state(6) to be set 00:21:56.588 [2024-11-20 09:55:33.247368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0f00 is same with the state(6) to be set 00:21:56.588 [2024-11-20 09:55:33.247381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0f00 is same with 
the state(6) to be set
00:21:56.588 [2024-11-20 09:55:33.248070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b0f00 is same with the state(6) to be set
00:21:56.588 [2024-11-20 09:55:33.249391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5424e0 is same with the state(6) to be set
00:21:56.589 [2024-11-20 09:55:33.250219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5424e0 is same with the state(6) to be set
00:21:56.589 [2024-11-20 09:55:33.251576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b13d0 is same with the state(6) to be set
00:21:56.590 [2024-11-20 09:55:33.252423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b13d0 is same with the state(6) to be set
00:21:56.590 [2024-11-20 09:55:33.255227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b2260 is same with the state(6) to be set
00:21:56.591 [2024-11-20 09:55:33.256082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b2260 is same with the state(6) to be set
00:21:56.591 [2024-11-20 09:55:33.258426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b2c00 is same with the state(6) to be set
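The polling visible in the shell trace above (waitforio querying bdev_get_iostat through jq until Nvme1n1 has serviced at least 100 reads) follows a simple pattern. The bash below is only a minimal sketch of that pattern, assuming scripts/rpc.py is available and bdevperf is listening on /var/tmp/bdevperf.sock; the helper name, retry budget, and argument handling are illustrative, not the test's own waitforio implementation.

wait_for_read_io() {
    # Poll bdev_get_iostat until the bdev has serviced enough reads, mirroring
    # the "read_io_count -ge 100" check in the trace above.
    local sock=$1 bdev=$2 threshold=${3:-100} retries=${4:-10} count
    while (( retries-- > 0 )); do
        count=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        [[ $count =~ ^[0-9]+$ ]] || count=0
        if [ "$count" -ge "$threshold" ]; then
            return 0
        fi
        sleep 0.25
    done
    return 1
}
# Example matching the trace: wait_for_read_io /var/tmp/bdevperf.sock Nvme1n1 100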
00:21:56.591 [2024-11-20 09:55:33.259295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b2c00 is same with the state(6) to be set
00:21:56.591 [2024-11-20 09:55:33.260444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with the state(6) to be set
00:21:56.591 [2024-11-20 09:55:33.260667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:56.592 [2024-11-20 09:55:33.260710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.592 [2024-11-20 09:55:33.260730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:56.592 [2024-11-20 09:55:33.260745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.592 [2024-11-20 09:55:33.260760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:56.592 [2024-11-20 09:55:33.260774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.592 [2024-11-20 09:55:33.260790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:56.592 [2024-11-20 09:55:33.260804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.592 [2024-11-20 09:55:33.260817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd0110 is same with the state(6) to be set
00:21:56.592 [2024-11-20 09:55:33.260888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:56.592 [2024-11-20 09:55:33.260911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.592 [2024-11-20 09:55:33.260928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:56.592 [2024-11-20 09:55:33.260943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.592 [2024-11-20 09:55:33.260957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:56.592 [2024-11-20 09:55:33.260970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.592 [2024-11-20 09:55:33.260984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:56.592 [2024-11-20 09:55:33.260997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.592 [2024-11-20 09:55:33.261010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21bb990 is same with the state(6) to be set
00:21:56.592 [2024-11-20 09:55:33.261073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:56.592 [2024-11-20 09:55:33.261094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.592 [2024-11-20 09:55:33.261096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with the state(6) to be set
00:21:56.592 [2024-11-20 09:55:33.261112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*:
ASYNC EVENT REQUEST (0c) qid:0 cid:1 nshe state(6) to be set 00:21:56.592 id:0 cdw10:00000000 cdw11:00000000 00:21:56.592 [2024-11-20 09:55:33.261126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with t[2024-11-20 09:55:33.261127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(6) to be set 00:21:56.592 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.592 [2024-11-20 09:55:33.261140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with the state(6) to be set 00:21:56.592 [2024-11-20 09:55:33.261143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.592 [2024-11-20 09:55:33.261153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with the state(6) to be set 00:21:56.592 [2024-11-20 09:55:33.261156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.592 [2024-11-20 09:55:33.261167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with the state(6) to be set 00:21:56.592 [2024-11-20 09:55:33.261171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.592 [2024-11-20 09:55:33.261180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with the state(6) to be set 00:21:56.592 [2024-11-20 09:55:33.261184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.592 [2024-11-20 09:55:33.261193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with the state(6) to be set 00:21:56.592 [2024-11-20 09:55:33.261197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ddac0 is same with the state(6) to be set 00:21:56.592 [2024-11-20 09:55:33.261206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with the state(6) to be set 00:21:56.592 [2024-11-20 09:55:33.261218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with the state(6) to be set 00:21:56.592 [2024-11-20 09:55:33.261230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with the state(6) to be set 00:21:56.592 [2024-11-20 09:55:33.261243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with the state(6) to be set 00:21:56.592 [2024-11-20 09:55:33.261245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-11-20 09:55:33.261255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with tid:0 cdw10:00000000 cdw11:00000000 00:21:56.592 he state(6) to be set 00:21:56.592 [2024-11-20 09:55:33.261271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.261277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.261298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x541c90 is same with t[2024-11-20 09:55:33.261312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nshe state(6) to be set 00:21:56.593 id:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.261328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with t[2024-11-20 09:55:33.261329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(6) to be set 00:21:56.593 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.261344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.261347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.261358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.261361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.261371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.261376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.261384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.261390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.261396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.261404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21bbb70 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.261409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.261421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x541c90 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.261454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.261476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.261491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.261505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.261519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.261533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.261546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.261559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.261572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d686f0 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.261634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.261656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.261690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.261705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.261719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.261732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.261746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.261759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.261771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d66030 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.261817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.261837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.261852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.261866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.261879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.261892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.261906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.261919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 
09:55:33.261931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21975d0 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.261977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.261997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.262012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.262025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.262040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.262053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.262067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.262080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.262097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5f220 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.262145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542010 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.262173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542010 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.262188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542010 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.262201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542010 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.262214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542010 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:56.593 [2024-11-20 09:55:33.262226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542010 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.593 [2024-11-20 09:55:33.262239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542010 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.593 [2024-11-20 09:55:33.262251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542010 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68270 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542010 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542010 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542010 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542010 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542010 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542010 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542010 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542010 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542010 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542010 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542010 is same with the state(6) to be set 00:21:56.593 [2024-11-20 09:55:33.262746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.262773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.262798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.262814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.262830] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.262845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.262861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.262875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.262891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.262905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.262921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.262936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.262951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.262965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.262982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.262996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.263977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.263991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.264007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.264021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.594 [2024-11-20 09:55:33.264036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.594 [2024-11-20 09:55:33.264050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.264837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.264880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:56.595 [2024-11-20 09:55:33.264997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.265017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.265038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.265053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.265069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.265083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.265098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.265113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.265128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.265142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.265157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.265171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.265186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.265200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.265215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.265228] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.265243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.265257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.265273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.265300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.265343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.265359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.265375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.265389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.265406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.595 [2024-11-20 09:55:33.265421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.595 [2024-11-20 09:55:33.265436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.265451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.265466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.265481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.265496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.265510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.265526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.265540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.265556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.265570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.265585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.265608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.265624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.265638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.265654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.265669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.265685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.265699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.265719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.265734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.265750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.265764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.265781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.265796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.265812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.265829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.265845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.265860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.265876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.265891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.265907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.265922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.265938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.265953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.265969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.265984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.596 [2024-11-20 09:55:33.266665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.596 [2024-11-20 09:55:33.266680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.266694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.266710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.266724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.266740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.266754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.266769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.266784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.266799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.266814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.266829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.266844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.266859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.266873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.266892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.266908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.266923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.266937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.266953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.266968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.266983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.266997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.267013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.267027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.267890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.267916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.267938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.267954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.267971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.267985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.268001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.268015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.268032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.268046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.268063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.268077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.268093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.268108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.268125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.268144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.268160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.268175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.268191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.268207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.268222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.268237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.268254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.268269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.268297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.268322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.268339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.268354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.268370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.268384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.268400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.268414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.268430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.268445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.268461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.268475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.268491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.268505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.268521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.268536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.268552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.268570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.597 [2024-11-20 09:55:33.268597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.597 [2024-11-20 09:55:33.268612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.268628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.268643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.268659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.268673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.268692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.268707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.268723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.268738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.268754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.268768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.268784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.268799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.268814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.268828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.268845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.268860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.268876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.268891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.268907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.268921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.268937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.268955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.268972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.268987] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.598 [2024-11-20 09:55:33.269817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.598 [2024-11-20 09:55:33.269832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.269856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.269870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.269885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.269900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.269916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.269930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.269946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.269960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.269999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:56.599 [2024-11-20 09:55:33.274348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:56.599 [2024-11-20 09:55:33.274401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:56.599 [2024-11-20 09:55:33.274431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d68270 (9): Bad file descriptor 00:21:56.599 [2024-11-20 09:55:33.274457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d66030 (9): Bad file descriptor 00:21:56.599 [2024-11-20 09:55:33.274484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd0110 (9): Bad file descriptor 00:21:56.599 [2024-11-20 09:55:33.274513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21bb990 (9): Bad file descriptor 00:21:56.599 [2024-11-20 09:55:33.274574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.599 [2024-11-20 09:55:33.274607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.274626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.599 [2024-11-20 09:55:33.274640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.274654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.599 [2024-11-20 09:55:33.274667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.274689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.599 [2024-11-20 09:55:33.274704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.274717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2198f20 is same with the state(6) to be set 00:21:56.599 [2024-11-20 09:55:33.274748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ddac0 (9): Bad file descriptor 00:21:56.599 [2024-11-20 09:55:33.274784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21bbb70 (9): Bad file descriptor 00:21:56.599 [2024-11-20 09:55:33.274816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d686f0 (9): Bad file descriptor 00:21:56.599 [2024-11-20 09:55:33.274847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21975d0 (9): Bad file descriptor 00:21:56.599 [2024-11-20 09:55:33.274879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x1d5f220 (9): Bad file descriptor 00:21:56.599 [2024-11-20 09:55:33.276193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.276974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.276990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.277004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.599 [2024-11-20 09:55:33.277020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.599 [2024-11-20 09:55:33.277035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 
[2024-11-20 09:55:33.277863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.277969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.277985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.278000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.278015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.278029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.278045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.278060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.278075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.278090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.278105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.278120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.278139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.278154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 
09:55:33.278170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.278185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.278201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.278215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.278230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.278245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.278261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.600 [2024-11-20 09:55:33.278275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.600 [2024-11-20 09:55:33.278344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:56.601 [2024-11-20 09:55:33.278519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:56.601 [2024-11-20 09:55:33.279389] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:56.601 [2024-11-20 09:55:33.279745] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:56.601 [2024-11-20 09:55:33.281018] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:56.601 [2024-11-20 09:55:33.281185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.601 [2024-11-20 09:55:33.281216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d66030 with addr=10.0.0.2, port=4420 00:21:56.601 [2024-11-20 09:55:33.281234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d66030 is same with the state(6) to be set 00:21:56.601 [2024-11-20 09:55:33.281351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.601 [2024-11-20 09:55:33.281379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d68270 with addr=10.0.0.2, port=4420 00:21:56.601 [2024-11-20 09:55:33.281395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68270 is same with the state(6) to be set 00:21:56.601 [2024-11-20 09:55:33.281468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.601 [2024-11-20 09:55:33.281494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd0110 with addr=10.0.0.2, port=4420 00:21:56.601 [2024-11-20 09:55:33.281510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd0110 is same with the state(6) to be set 00:21:56.601 [2024-11-20 09:55:33.281603] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:21:56.601 [2024-11-20 09:55:33.281677] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:56.601 [2024-11-20 09:55:33.281821] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:56.601 [2024-11-20 09:55:33.281900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:56.601 [2024-11-20 09:55:33.281935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2198f20 (9): Bad file descriptor 00:21:56.601 [2024-11-20 09:55:33.281975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d66030 (9): Bad file descriptor 00:21:56.601 [2024-11-20 09:55:33.282002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d68270 (9): Bad file descriptor 00:21:56.601 [2024-11-20 09:55:33.282021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd0110 (9): Bad file descriptor 00:21:56.601 [2024-11-20 09:55:33.282454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:56.601 [2024-11-20 09:55:33.282480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:56.601 [2024-11-20 09:55:33.282499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:56.601 [2024-11-20 09:55:33.282515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:56.601 [2024-11-20 09:55:33.282531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:56.601 [2024-11-20 09:55:33.282544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:56.601 [2024-11-20 09:55:33.282557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:56.601 [2024-11-20 09:55:33.282570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:56.601 [2024-11-20 09:55:33.282594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:56.601 [2024-11-20 09:55:33.282607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:56.601 [2024-11-20 09:55:33.282620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:56.601 [2024-11-20 09:55:33.282633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:21:56.601 [2024-11-20 09:55:33.282770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.601 [2024-11-20 09:55:33.282798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2198f20 with addr=10.0.0.2, port=4420
00:21:56.601 [2024-11-20 09:55:33.282814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2198f20 is same with the state(6) to be set
00:21:56.601 [2024-11-20 09:55:33.282894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2198f20 (9): Bad file descriptor
00:21:56.601 [2024-11-20 09:55:33.282952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:21:56.601 [2024-11-20 09:55:33.282971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:21:56.601 [2024-11-20 09:55:33.282985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:21:56.601 [2024-11-20 09:55:33.282999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:21:56.601 [2024-11-20 09:55:33.284513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.601 [2024-11-20 09:55:33.284541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) pair repeats for cid:1 through cid:62 (lba 16512 through 24320, step 128) ...]
00:21:56.603 [2024-11-20 09:55:33.286561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.603 [2024-11-20 09:55:33.286575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.603 [2024-11-20 09:55:33.286590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6c7e0 is same with the state(6) to be set
00:21:56.603 [2024-11-20 09:55:33.287846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.603 [2024-11-20 09:55:33.287875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) pair repeats for cid:1 through cid:62 (lba 16512 through 24320, step 128) ...]
00:21:56.605 [2024-11-20 09:55:33.289824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.605 [2024-11-20 09:55:33.289842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.605 [2024-11-20 09:55:33.289859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6d9b0 is same with the state(6) to be set
00:21:56.605 [2024-11-20 09:55:33.291094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.605 [2024-11-20 09:55:33.291117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) pair repeats for cid:1 through cid:56 (lba 16512 through 23552, step 128) ...]
00:21:56.606 [2024-11-20 09:55:33.292888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:56.606 [2024-11-20 09:55:33.292902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:56.606 [2024-11-20 09:55:33.292918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.606 [2024-11-20 09:55:33.292933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.606 [2024-11-20 09:55:33.292949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.606 [2024-11-20 09:55:33.292964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.606 [2024-11-20 09:55:33.292980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.606 [2024-11-20 09:55:33.292994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.606 [2024-11-20 09:55:33.293011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.606 [2024-11-20 09:55:33.293033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.606 [2024-11-20 09:55:33.293049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.606 [2024-11-20 09:55:33.293063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.606 [2024-11-20 09:55:33.293079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.606 [2024-11-20 09:55:33.293094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.606 [2024-11-20 09:55:33.293108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169750 is same with the state(6) to be set 00:21:56.606 [2024-11-20 09:55:33.294358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.606 [2024-11-20 09:55:33.294381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.606 [2024-11-20 09:55:33.294402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.606 [2024-11-20 09:55:33.294418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.606 [2024-11-20 09:55:33.294435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.606 [2024-11-20 09:55:33.294450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.606 [2024-11-20 09:55:33.294465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.606 [2024-11-20 09:55:33.294481] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.606 [2024-11-20 09:55:33.294497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.606 [2024-11-20 09:55:33.294511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.606 [2024-11-20 09:55:33.294527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.606 [2024-11-20 09:55:33.294542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.606 [2024-11-20 09:55:33.294558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.606 [2024-11-20 09:55:33.294573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.294589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.294603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.294619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.294633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.294649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.294669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.294685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.294700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.294716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.294731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.294746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.294761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.294777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.294791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.294807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.294822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.294839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.294854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.294870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.294892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.294908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.294923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.294939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.294954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.294970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.294985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.607 [2024-11-20 09:55:33.295725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.607 [2024-11-20 09:55:33.295739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.295754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.295769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.295784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.295799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.295814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.295829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.295849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.295864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.295880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.295894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.295910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.295924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.295941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.295955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.295970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.295985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.296000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.296014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.296030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.296045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.296061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.296075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.296090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.296105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.296121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.296135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.296151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.296165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.296181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.296195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.296211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.296229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.296246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.296260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.296275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.296290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.296311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.296328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.296344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.296358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.296373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c1b0 is same with the state(6) to be set 00:21:56.608 [2024-11-20 09:55:33.297607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.297629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.297652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.297668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.297684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.297699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.297715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.297730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.297746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.297760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.297776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.297791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.297807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.297821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.297838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.297857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.297874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.297888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.297905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.297920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.297936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.297950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.297966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.297980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.297997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.298011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.298027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.298041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.298058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.298073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.298089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.298104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.298120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.298136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.298152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.608 [2024-11-20 09:55:33.298167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.608 [2024-11-20 09:55:33.298183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.298982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.298998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.299012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.299028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.299047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.299063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.299078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.299094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.299108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.299124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.299139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.299155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:56.609 [2024-11-20 09:55:33.299170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.299185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.299200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.299216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.299230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.299246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.299260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.299275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.299289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.299313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.299329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.299345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.609 [2024-11-20 09:55:33.299360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.609 [2024-11-20 09:55:33.299376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.299390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.299406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.299421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.299440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.299456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.299471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 
09:55:33.299486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.299502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.299516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.299532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.299546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.299562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.299577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.299592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.299607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.299622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216d6e0 is same with the state(6) to be set 00:21:56.610 [2024-11-20 09:55:33.300867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.300891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.300912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.300928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.300944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.300958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.300974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.300990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.610 [2024-11-20 09:55:33.301668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.610 [2024-11-20 09:55:33.301681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.301697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.301711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.301728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.301742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.301758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.301772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.301787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.301801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.301817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.301831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.301851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.301866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.301881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.301896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.301911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.301926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.301942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.301956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.301972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.301987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.611 [2024-11-20 09:55:33.302855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.611 [2024-11-20 09:55:33.302869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21700e0 is same with the state(6) to be set 00:21:56.611 [2024-11-20 09:55:33.304889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:56.612 [2024-11-20 09:55:33.304937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode2, 1] resetting controller 
00:21:56.612 [2024-11-20 09:55:33.304956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 
00:21:56.612 [2024-11-20 09:55:33.304974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 
00:21:56.612 [2024-11-20 09:55:33.305103] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:21:56.612 [2024-11-20 09:55:33.305132] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:21:56.612 [2024-11-20 09:55:33.305231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 
00:21:56.612 task offset: 27008 on job bdev=Nvme3n1 fails 
00:21:56.612 
00:21:56.612 Latency(us) 
00:21:56.612 [2024-11-20T08:55:33.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:56.612 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:56.612 Job: Nvme1n1 ended in about 0.93 seconds with error 
00:21:56.612 Verification LBA range: start 0x0 length 0x400 
00:21:56.612 Nvme1n1 : 0.93 138.02 8.63 69.01 0.00 305866.27 21554.06 262532.36 
00:21:56.612 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:56.612 Job: Nvme2n1 ended in about 0.93 seconds with error 
00:21:56.612 Verification LBA range: start 0x0 length 0x400 
00:21:56.612 Nvme2n1 : 0.93 137.54 8.60 68.77 0.00 300806.19 22039.51 265639.25 
00:21:56.612 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:56.612 Job: Nvme3n1 ended in about 0.91 seconds with error 
00:21:56.612 Verification LBA range: start 0x0 length 0x400 
00:21:56.612 Nvme3n1 : 0.91 210.65 13.17 70.22 0.00 216206.13 8252.68 264085.81 
00:21:56.612 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:56.612 Job: Nvme4n1 ended in about 0.91 seconds with error 
00:21:56.612 Verification LBA range: start 0x0 length 0x400 
00:21:56.612 Nvme4n1 : 0.91 210.40 13.15 70.13 0.00 211931.59 11845.03 273406.48 
00:21:56.612 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:56.612 Job: Nvme5n1 ended in about 0.93 seconds with error 
00:21:56.612 Verification LBA range: start 0x0 length 0x400 
00:21:56.612 Nvme5n1 : 0.93 137.06 8.57 68.53 0.00 283691.24 20777.34 281173.71 
00:21:56.612 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:56.612 Job: Nvme6n1 ended in about 0.91 seconds with error 
00:21:56.612 Verification LBA range: start 0x0 length 0x400 
00:21:56.612 Nvme6n1 : 0.91 210.14 13.13 70.05 0.00 203160.18 5339.97 236123.78 
00:21:56.612 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:56.612 Job: Nvme7n1 ended in about 0.94 seconds with error 
00:21:56.612 Verification LBA range: start 0x0 length 0x400 
00:21:56.612 Nvme7n1 : 0.94 136.59 8.54 68.29 0.00 272842.65 19709.35 257872.02 
00:21:56.612 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:56.612 Job: Nvme8n1 ended in about 0.94 seconds with error 
00:21:56.612 Verification LBA range: start 0x0 length 0x400 
00:21:56.612 Nvme8n1 : 0.94 136.12 8.51 68.06 0.00 268087.25 19320.98 245444.46 
00:21:56.612 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:56.612 Job: Nvme9n1 ended in about 0.92 seconds with error 
00:21:56.612 Verification LBA range: start 0x0 length 0x400 
00:21:56.612 Nvme9n1 : 0.92 208.56 13.03 69.52 0.00 191592.87 3980.71 251658.24 
00:21:56.612 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:56.612 Job: Nvme10n1 ended in about 0.94 seconds with error 
00:21:56.612 Verification LBA range: start 0x0 length 0x400 
00:21:56.612 Nvme10n1 : 0.94 135.65 8.48 67.83 0.00 257357.75 20777.34 304475.40 
00:21:56.612 [2024-11-20T08:55:33.526Z] =================================================================================================================== 
00:21:56.612 [2024-11-20T08:55:33.526Z] Total : 1660.74 103.80 690.41 0.00 245809.33 3980.71 304475.40 
00:21:56.612 [2024-11-20 09:55:33.333150] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 
00:21:56.612 [2024-11-20 09:55:33.333244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 
00:21:56.612 [2024-11-20 09:55:33.333537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:21:56.612 [2024-11-20 09:55:33.333575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d686f0 with addr=10.0.0.2, port=4420 
00:21:56.612 [2024-11-20 09:55:33.333596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d686f0 is same with the state(6) to be set 
00:21:56.612 [2024-11-20 09:55:33.333688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:21:56.612 [2024-11-20 09:55:33.333717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d5f220 with addr=10.0.0.2, port=4420 
00:21:56.612 [2024-11-20 09:55:33.333733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5f220 is same with the state(6) to be set 
00:21:56.612 [2024-11-20 09:55:33.333830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:21:56.612 [2024-11-20 09:55:33.333857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21975d0 with addr=10.0.0.2, port=4420 
00:21:56.612 [2024-11-20 09:55:33.333873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21975d0 is same with the state(6) to be set 
00:21:56.612 [2024-11-20 09:55:33.333965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:21:56.612 [2024-11-20 09:55:33.333991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bbb70 with addr=10.0.0.2, port=4420 
00:21:56.612 [2024-11-20 09:55:33.334007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21bbb70 is same with the state(6) to be set 
00:21:56.612 [2024-11-20 09:55:33.335709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 
00:21:56.612 [2024-11-20 09:55:33.335741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 
00:21:56.612 [2024-11-20 09:55:33.335768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 
00:21:56.612 [2024-11-20 09:55:33.335785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 
00:21:56.612 [2024-11-20 09:55:33.335944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:21:56.612 [2024-11-20 09:55:33.335974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0x21bb990 with addr=10.0.0.2, port=4420 00:21:56.612 [2024-11-20 09:55:33.335991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21bb990 is same with the state(6) to be set 00:21:56.612 [2024-11-20 09:55:33.336094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.612 [2024-11-20 09:55:33.336120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ddac0 with addr=10.0.0.2, port=4420 00:21:56.612 [2024-11-20 09:55:33.336136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ddac0 is same with the state(6) to be set 00:21:56.612 [2024-11-20 09:55:33.336161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d686f0 (9): Bad file descriptor 00:21:56.612 [2024-11-20 09:55:33.336185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5f220 (9): Bad file descriptor 00:21:56.612 [2024-11-20 09:55:33.336205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21975d0 (9): Bad file descriptor 00:21:56.612 [2024-11-20 09:55:33.336223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21bbb70 (9): Bad file descriptor 00:21:56.612 [2024-11-20 09:55:33.336295] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:21:56.612 [2024-11-20 09:55:33.336330] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:21:56.612 [2024-11-20 09:55:33.336353] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:21:56.612 [2024-11-20 09:55:33.336376] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
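For context when reading the errors above: errno 111 is ECONNREFUSED, and the repeated connect()/flush failures appear because this test deliberately takes the target down while bdevperf I/O is in flight, so the listener at 10.0.0.2:4420 is no longer accepting connections and every reconnect attempt from the bdev_nvme reset path is refused until the controllers are put into the failed state. A minimal sketch (not part of the test run; it assumes netcat and nvme-cli happen to be installed on the initiator side) of how one could confirm from a shell that the listener is gone:

    # both checks are expected to fail with "connection refused" once the target app has stopped
    nc -z -w 2 10.0.0.2 4420 && echo "listener up" || echo "refused (errno 111)"
    nvme discover -t tcp -a 10.0.0.2 -s 4420    # nvme-cli; fails the same way while nvmf_tgt is down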
00:21:56.612 [2024-11-20 09:55:33.336582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.612 [2024-11-20 09:55:33.336612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd0110 with addr=10.0.0.2, port=4420 00:21:56.612 [2024-11-20 09:55:33.336628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd0110 is same with the state(6) to be set 00:21:56.612 [2024-11-20 09:55:33.336701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.612 [2024-11-20 09:55:33.336727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d68270 with addr=10.0.0.2, port=4420 00:21:56.612 [2024-11-20 09:55:33.336748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68270 is same with the state(6) to be set 00:21:56.612 [2024-11-20 09:55:33.336838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.612 [2024-11-20 09:55:33.336864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d66030 with addr=10.0.0.2, port=4420 00:21:56.612 [2024-11-20 09:55:33.336881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d66030 is same with the state(6) to be set 00:21:56.612 [2024-11-20 09:55:33.336963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.612 [2024-11-20 09:55:33.336989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2198f20 with addr=10.0.0.2, port=4420 00:21:56.612 [2024-11-20 09:55:33.337005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2198f20 is same with the state(6) to be set 00:21:56.612 [2024-11-20 09:55:33.337024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21bb990 (9): Bad file descriptor 00:21:56.612 [2024-11-20 09:55:33.337044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ddac0 (9): Bad file descriptor 00:21:56.612 [2024-11-20 09:55:33.337062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:56.612 [2024-11-20 09:55:33.337076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:56.612 [2024-11-20 09:55:33.337093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:56.612 [2024-11-20 09:55:33.337110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:56.612 [2024-11-20 09:55:33.337126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:56.612 [2024-11-20 09:55:33.337139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:56.612 [2024-11-20 09:55:33.337152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:56.612 [2024-11-20 09:55:33.337165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:21:56.613 [2024-11-20 09:55:33.337179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:56.613 [2024-11-20 09:55:33.337192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:56.613 [2024-11-20 09:55:33.337205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:56.613 [2024-11-20 09:55:33.337218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:56.613 [2024-11-20 09:55:33.337231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:56.613 [2024-11-20 09:55:33.337244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:56.613 [2024-11-20 09:55:33.337256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:56.613 [2024-11-20 09:55:33.337269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:56.613 [2024-11-20 09:55:33.337397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd0110 (9): Bad file descriptor 00:21:56.613 [2024-11-20 09:55:33.337425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d68270 (9): Bad file descriptor 00:21:56.613 [2024-11-20 09:55:33.337443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d66030 (9): Bad file descriptor 00:21:56.613 [2024-11-20 09:55:33.337462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2198f20 (9): Bad file descriptor 00:21:56.613 [2024-11-20 09:55:33.337484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:56.613 [2024-11-20 09:55:33.337499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:56.613 [2024-11-20 09:55:33.337512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:56.613 [2024-11-20 09:55:33.337526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:56.613 [2024-11-20 09:55:33.337540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:56.613 [2024-11-20 09:55:33.337553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:56.613 [2024-11-20 09:55:33.337566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:56.613 [2024-11-20 09:55:33.337579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:21:56.613 [2024-11-20 09:55:33.337619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:56.613 [2024-11-20 09:55:33.337638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:56.613 [2024-11-20 09:55:33.337652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:56.613 [2024-11-20 09:55:33.337666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:56.613 [2024-11-20 09:55:33.337681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:56.613 [2024-11-20 09:55:33.337693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:56.613 [2024-11-20 09:55:33.337706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:56.613 [2024-11-20 09:55:33.337719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:56.613 [2024-11-20 09:55:33.337732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:56.613 [2024-11-20 09:55:33.337744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:56.613 [2024-11-20 09:55:33.337758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:56.613 [2024-11-20 09:55:33.337770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:56.613 [2024-11-20 09:55:33.337783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:56.613 [2024-11-20 09:55:33.337795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:56.613 [2024-11-20 09:55:33.337809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:56.613 [2024-11-20 09:55:33.337822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
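A quick way to sanity-check the bdevperf summary printed above: the MiB/s column is simply IOPS times the 64 KiB I/O size (for Nvme1n1, 138.02 IOPS * 65536 B = 8.63 MiB/s), the Average/min/max columns are latencies in microseconds per the Latency(us) header, and every job shows a non-zero Fail/s rate, which is consistent with the target being torn down mid-run. The arithmetic can be reproduced with an illustrative one-liner:

    awk 'BEGIN { printf "%.2f MiB/s\n", 138.02 * 65536 / 1048576 }'   # prints 8.63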
00:21:56.872 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3785160 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3785160 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3785160 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.249 rmmod nvme_tcp 00:21:58.249 
rmmod nvme_fabrics 00:21:58.249 rmmod nvme_keyring 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3784982 ']' 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3784982 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3784982 ']' 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3784982 00:21:58.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3784982) - No such process 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3784982 is not found' 00:21:58.249 Process with pid 3784982 is not found 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.249 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:00.158 00:22:00.158 real 0m7.605s 00:22:00.158 user 0m18.924s 00:22:00.158 sys 0m1.537s 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:00.158 ************************************ 00:22:00.158 END TEST nvmf_shutdown_tc3 00:22:00.158 ************************************ 00:22:00.158 09:55:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:00.158 ************************************ 00:22:00.158 START TEST nvmf_shutdown_tc4 00:22:00.158 ************************************ 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:00.158 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:00.158 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:00.159 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.159 09:55:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:00.159 Found net devices under 0000:09:00.0: cvl_0_0 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:00.159 Found net devices under 0000:09:00.1: cvl_0_1 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:00.159 09:55:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:00.159 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:00.159 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:00.159 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:00.159 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:00.159 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:00.418 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:00.418 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:00.418 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:00.418 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:00.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:22:00.418 00:22:00.418 --- 10.0.0.2 ping statistics --- 00:22:00.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.418 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:22:00.418 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:00.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:00.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:22:00.418 00:22:00.418 --- 10.0.0.1 ping statistics --- 00:22:00.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.418 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:22:00.418 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.418 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:00.418 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:00.418 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.418 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:00.418 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:00.418 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.418 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:00.418 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:00.418 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:00.419 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:00.419 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:00.419 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:00.419 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3785993 00:22:00.419 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:00.419 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3785993 00:22:00.419 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3785993 ']' 00:22:00.419 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.419 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.419 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
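For reference, the bring-up traced above boils down to roughly the following stand-alone sketch (a minimal paraphrase, not the nvmf/common.sh code itself). The two E810 ports were first located under /sys/bus/pci/devices/<bdf>/net/ (the "Found net devices under 0000:09:00.x" lines); the interface names cvl_0_0/cvl_0_1, the namespace name and the 10.0.0.0/24 addresses are the ones on this host and will differ elsewhere.

  # Sketch of the TCP test-net setup traced above (nvmf_tcp_init).
  NS=cvl_0_0_ns_spdk
  TGT_IF=cvl_0_0          # becomes the target-side port, 10.0.0.2, inside the namespace
  INI_IF=cvl_0_1          # stays in the default namespace as the initiator side, 10.0.0.1

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                        # move the target port into the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                    # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Open TCP port 4420 on the initiator-side interface, mirroring the ipts call above.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

  # Sanity-check connectivity in both directions, as in the ping output above.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1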
00:22:00.419 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.419 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:00.419 [2024-11-20 09:55:37.206182] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:22:00.419 [2024-11-20 09:55:37.206258] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.419 [2024-11-20 09:55:37.279459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:00.678 [2024-11-20 09:55:37.341928] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.678 [2024-11-20 09:55:37.341977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.678 [2024-11-20 09:55:37.341991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.678 [2024-11-20 09:55:37.342001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.678 [2024-11-20 09:55:37.342011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.678 [2024-11-20 09:55:37.343675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.678 [2024-11-20 09:55:37.343736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.678 [2024-11-20 09:55:37.343804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:00.678 [2024-11-20 09:55:37.343807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:00.678 [2024-11-20 09:55:37.504035] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:00.678 09:55:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.678 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:00.678 Malloc1 
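At this point nvmf_tgt is running inside the namespace and the per-subsystem RPC payload written to rpcs.txt is being loaded (the Malloc1..Malloc10 lines that follow are its output). Condensed, the target bring-up amounts to roughly the sketch below. The nvmf_tgt flags, the RPC socket and the transport options are taken from the trace; the per-subsystem RPCs are an assumption about what the generated rpcs.txt contains (the script only shows "cat" into that file), inferred from the Malloc bdev names and the 10.0.0.2:4420 listener reported next.

  # Minimal sketch of the target bring-up traced above; the helpers here (rpc,
  # the socket wait loop) stand in for the framework's rpc_cmd/waitforlisten.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$SPDK/scripts/rpc.py" "$@"; }

  # Start the target inside the namespace with the flags from the trace and
  # wait for its RPC socket.
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

  # TCP transport, options exactly as traced.
  rpc nvmf_create_transport -t tcp -o -u 8192

  # One malloc bdev and one subsystem per index. These are standard SPDK RPCs,
  # but the exact rpcs.txt payload is not shown in the log; the bdev size,
  # block size and serial numbers below are illustrative only.
  for i in $(seq 1 10); do
      rpc bdev_malloc_create -b "Malloc$i" 64 512
      rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done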
00:22:00.937 [2024-11-20 09:55:37.607238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.937 Malloc2 00:22:00.937 Malloc3 00:22:00.937 Malloc4 00:22:00.937 Malloc5 00:22:00.937 Malloc6 00:22:01.195 Malloc7 00:22:01.195 Malloc8 00:22:01.195 Malloc9 00:22:01.195 Malloc10 00:22:01.195 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.195 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:01.195 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:01.195 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:01.195 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3786141 00:22:01.195 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:01.195 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:01.453 [2024-11-20 09:55:38.137965] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:06.729 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:06.729 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3785993 00:22:06.729 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3785993 ']' 00:22:06.729 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3785993 00:22:06.729 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:22:06.729 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:06.729 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3785993 00:22:06.729 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:06.729 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:06.729 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3785993' 00:22:06.729 killing process with pid 3785993 00:22:06.729 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3785993 00:22:06.729 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3785993 00:22:06.729 [2024-11-20 09:55:43.132409] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31c30 is same with the state(6) to be set 00:22:06.729 [2024-11-20 09:55:43.132492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31c30 is same with the state(6) to be set 00:22:06.729 [2024-11-20 09:55:43.133798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d325f0 is same with the state(6) to be set 00:22:06.729 [2024-11-20 09:55:43.133847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d325f0 is same with the state(6) to be set 00:22:06.729 [2024-11-20 09:55:43.133865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d325f0 is same with the state(6) to be set 00:22:06.729 [2024-11-20 09:55:43.133878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d325f0 is same with the state(6) to be set 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 starting I/O failed: -6 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 starting I/O failed: -6 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 starting I/O failed: -6 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 starting I/O failed: -6 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 starting I/O failed: -6 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 starting I/O failed: -6 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 starting I/O failed: -6 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 starting I/O failed: -6 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 starting I/O failed: -6 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 [2024-11-20 09:55:43.134891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:06.729 NVMe io qpair process completion error 00:22:06.729 [2024-11-20 
09:55:43.136475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7e310 is same with the state(6) to be set 00:22:06.729 [2024-11-20 09:55:43.136507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7e310 is same with the state(6) to be set 00:22:06.729 [2024-11-20 09:55:43.136522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7e310 is same with the state(6) to be set 00:22:06.729 [2024-11-20 09:55:43.136535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7e310 is same with the state(6) to be set 00:22:06.729 [2024-11-20 09:55:43.136570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7e310 is same with the state(6) to be set 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 starting I/O failed: -6 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.729 starting I/O failed: -6 00:22:06.729 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 [2024-11-20 09:55:43.143180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] 
CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 [2024-11-20 09:55:43.144299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 
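The flood of "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries above and below is the intended outcome of shutdown_tc4: spdk_nvme_perf was started against the subsystems and the target process was then killed while those 128-deep random writes were in flight, so every outstanding command fails and the initiator reports a CQ transport error as each qpair is torn down. Paraphrased from the shutdown.sh steps traced earlier (not the script verbatim), the sequence is roughly:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Heavy write load against the target, backgrounded so the target can be
  # pulled out from under it (flags as traced at shutdown.sh@148).
  "$SPDK/build/bin/spdk_nvme_perf" -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
  perfpid=$!

  sleep 5                 # let the workload ramp up (shutdown.sh@150)

  kill "$nvmfpid"         # nvmf_tgt started earlier (PID 3785993 in this run);
  wait "$nvmfpid"         # outstanding writes now complete with abort/transport errors

  # The test's trap, as traced, still force-kills perf and tears everything down
  # on exit: 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1'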
00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 
00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 [2024-11-20 09:55:43.145608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.730 starting I/O failed: -6 00:22:06.730 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write 
completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 [2024-11-20 09:55:43.147375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:06.731 NVMe io qpair process completion error 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed 
with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 [2024-11-20 09:55:43.151456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.731 [2024-11-20 09:55:43.151467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1df80 is same with the state(6) to be set 00:22:06.731 [2024-11-20 09:55:43.151503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1df80 is same with the state(6) to be set 00:22:06.731 [2024-11-20 09:55:43.151518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1df80 is same with the state(6) to be set 00:22:06.731 [2024-11-20 09:55:43.151531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1df80 is same with the state(6) to be set 00:22:06.731 [2024-11-20 09:55:43.151543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1df80 is same with the state(6) to be set 00:22:06.731 [2024-11-20 09:55:43.151555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1df80 is same with the state(6) to be set 00:22:06.731 [2024-11-20 09:55:43.151569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1df80 is same with the state(6) to be set 00:22:06.731 [2024-11-20 09:55:43.151582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1df80 is same with the state(6) to be set 00:22:06.731 [2024-11-20 09:55:43.151595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1df80 is same with the state(6) to be set 00:22:06.731 [2024-11-20 09:55:43.151607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1df80 is same with the state(6) to be set 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 [2024-11-20 09:55:43.151620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1df80 is same with the state(6) to be set 00:22:06.731 starting I/O failed: -6 00:22:06.731 [2024-11-20 09:55:43.151633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1df80 is same with the state(6) to be set 
00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 [2024-11-20 09:55:43.151646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1df80 is same with the state(6) to be set 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 [2024-11-20 09:55:43.151658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1df80 is same with the state(6) to be set 00:22:06.731 starting I/O failed: -6 00:22:06.731 [2024-11-20 09:55:43.151671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1df80 is same with the state(6) to be set 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 [2024-11-20 09:55:43.151683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1df80 is same with the state(6) to be set 00:22:06.731 [2024-11-20 09:55:43.151698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1df80 is same with the state(6) to be set 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 [2024-11-20 09:55:43.151711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1df80 is same with the state(6) to be set 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.731 starting I/O failed: -6 00:22:06.731 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 [2024-11-20 09:55:43.152216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e450 is same with the state(6) to be set 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 [2024-11-20 09:55:43.152250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1d1e450 is same with the state(6) to be set 00:22:06.732 starting I/O failed: -6 00:22:06.732 [2024-11-20 09:55:43.152267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e450 is same with the state(6) to be set 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 [2024-11-20 09:55:43.152281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e450 is same with the state(6) to be set 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 [2024-11-20 09:55:43.152355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1d5e0 is same with the state(6) to be set 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 [2024-11-20 09:55:43.152385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1d5e0 is same with the state(6) to be set 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 [2024-11-20 09:55:43.152400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1d5e0 is same with the state(6) to be set 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 [2024-11-20 09:55:43.152413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1d5e0 is same with the state(6) to be set 00:22:06.732 starting I/O failed: -6 00:22:06.732 [2024-11-20 09:55:43.152426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1d5e0 is same with the state(6) to be set 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 [2024-11-20 09:55:43.152438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1d5e0 is same with the state(6) to be set 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 [2024-11-20 09:55:43.152489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write 
completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 [2024-11-20 09:55:43.153886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 
00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.732 Write completed with error (sct=0, sc=8) 00:22:06.732 starting I/O failed: -6 00:22:06.733 Write completed with error (sct=0, sc=8) 00:22:06.733 starting I/O failed: -6 00:22:06.733 Write completed with error (sct=0, sc=8) 00:22:06.733 starting I/O failed: -6 00:22:06.733 Write completed with error (sct=0, sc=8) 00:22:06.733 starting I/O failed: -6 00:22:06.733 Write completed with error (sct=0, sc=8) 00:22:06.733 starting I/O failed: -6 00:22:06.733 Write completed with error (sct=0, sc=8) 00:22:06.733 starting I/O failed: -6 00:22:06.733 Write completed with error (sct=0, sc=8) 00:22:06.733 starting I/O failed: -6 00:22:06.733 Write completed with error (sct=0, sc=8) 00:22:06.733 starting I/O failed: -6 00:22:06.733 Write completed with error (sct=0, sc=8) 00:22:06.733 starting I/O failed: -6 00:22:06.733 Write completed with error (sct=0, sc=8) 00:22:06.733 starting I/O failed: -6 00:22:06.733 Write completed with error (sct=0, sc=8) 00:22:06.733 starting I/O failed: -6 00:22:06.733 Write completed with error (sct=0, sc=8) 00:22:06.733 starting I/O failed: -6 00:22:06.733 Write completed with error (sct=0, sc=8) 00:22:06.733 starting I/O failed: -6 00:22:06.733 Write completed with error (sct=0, sc=8) 00:22:06.733 starting I/O failed: -6 00:22:06.733 Write completed with error (sct=0, sc=8) 00:22:06.733 starting I/O failed: -6 00:22:06.733 Write completed with error (sct=0, sc=8) 00:22:06.733 starting I/O failed: -6 00:22:06.733 Write completed with error (sct=0, sc=8) 00:22:06.733 starting I/O failed: -6 00:22:06.733 Write completed with error (sct=0, sc=8) 00:22:06.733 starting I/O failed: -6 00:22:06.733 Write completed with error (sct=0, sc=8) 
00:22:06.733 starting I/O failed: -6
00:22:06.733 Write completed with error (sct=0, sc=8)
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries elided between the qpair error messages that follow ...]
00:22:06.733 [2024-11-20 09:55:43.155410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:06.733 NVMe io qpair process completion error
00:22:06.733 [2024-11-20 09:55:43.156727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:06.733 [2024-11-20 09:55:43.157776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:06.734 [2024-11-20 09:55:43.158884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:06.734 [2024-11-20 09:55:43.161006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:06.734 NVMe io qpair process completion error
00:22:06.735 [2024-11-20 09:55:43.162358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:06.735 [2024-11-20 09:55:43.163362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:06.735 [2024-11-20 09:55:43.164497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:06.736 [2024-11-20 09:55:43.166535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:06.736 NVMe io qpair process completion error
00:22:06.736 [2024-11-20 09:55:43.167920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:06.736 [2024-11-20 09:55:43.168845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:06.737 [2024-11-20 09:55:43.170051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:06.737 [2024-11-20 09:55:43.173996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:06.737 NVMe io qpair process completion error
00:22:06.738 [2024-11-20 09:55:43.175436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:06.738 [2024-11-20 09:55:43.176488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:06.738 [2024-11-20 09:55:43.177591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:06.739 [2024-11-20 09:55:43.180189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:06.739 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" entries elided ...]
00:22:06.739 Write completed with error 
(sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.739 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed 
with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 [2024-11-20 09:55:43.184126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting 
I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 [2024-11-20 09:55:43.185168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 
00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.740 Write completed with error (sct=0, sc=8) 00:22:06.740 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 [2024-11-20 09:55:43.186314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error 
-6 (No such device or address) on qpair id 4 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, 
sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 [2024-11-20 09:55:43.188607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:06.741 NVMe io qpair process completion error 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write 
completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 [2024-11-20 09:55:43.189814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.741 starting I/O failed: -6 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 Write completed with error (sct=0, sc=8) 00:22:06.741 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 
00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 [2024-11-20 09:55:43.190884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 
00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 [2024-11-20 09:55:43.192062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 
00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.742 Write completed with error (sct=0, sc=8) 00:22:06.742 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 [2024-11-20 09:55:43.194480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: 
*ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:06.743 NVMe io qpair process completion error 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 
starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 
00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.743 starting I/O failed: -6 00:22:06.743 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 
00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 
00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 Write completed with error (sct=0, sc=8) 00:22:06.744 starting I/O failed: -6 00:22:06.744 [2024-11-20 09:55:43.200968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:06.744 NVMe io qpair process completion error 00:22:06.744 Initializing NVMe Controllers 00:22:06.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:22:06.744 Controller IO queue size 128, less than required. 00:22:06.744 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:06.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:06.744 Controller IO queue size 128, less than required. 00:22:06.744 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:06.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:22:06.744 Controller IO queue size 128, less than required. 00:22:06.744 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:06.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:22:06.744 Controller IO queue size 128, less than required. 00:22:06.744 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:06.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:22:06.744 Controller IO queue size 128, less than required. 00:22:06.744 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:06.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:22:06.744 Controller IO queue size 128, less than required. 00:22:06.744 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:06.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:22:06.744 Controller IO queue size 128, less than required. 00:22:06.744 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:06.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:22:06.744 Controller IO queue size 128, less than required. 00:22:06.744 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:06.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:22:06.744 Controller IO queue size 128, less than required. 00:22:06.744 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:06.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:22:06.744 Controller IO queue size 128, less than required. 00:22:06.744 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
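The "Controller IO queue size 128, less than required" advisory above means spdk_nvme_perf asked to keep more I/O outstanding than the 128-entry I/O queue granted by the target-side controller, so the surplus requests sit queued inside the NVMe driver; in this run the warning is incidental, because nvmf_shutdown_tc4 deliberately tears the target down underneath the running perf job. If the warning itself were the concern, the usual remedy is to lower the queue depth and/or I/O size passed to perf. A minimal sketch, assuming the commonly documented perf options -q (queue depth), -o (I/O size in bytes), -w (workload), -t (run time in seconds) and -r (transport ID); the actual invocation used by this test lives in target/shutdown.sh and is not shown in this excerpt:

  # hypothetical invocation for illustration, not the one issued by shutdown.sh
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 64 -o 4096 -w write -t 5 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode3'

Keeping -q at or below the advertised queue size (128 here) avoids the driver-side queueing the log warns about, at the cost of less I/O kept in flight per connection.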
00:22:06.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:06.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:06.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:06.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:06.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:06.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:06.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:06.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:06.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:06.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:06.745 Initialization complete. Launching workers.
00:22:06.745 ========================================================
00:22:06.745                                                                            Latency(us)
00:22:06.745 Device Information                                                  :      IOPS     MiB/s    Average        min        max
00:22:06.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3)  NSID 1 from core 0:   1865.97     80.18   68618.17     872.62  130548.44
00:22:06.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1)  NSID 1 from core 0:   1700.90     73.09   75224.93    1204.57  120765.00
00:22:06.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8)  NSID 1 from core 0:   1754.12     75.37   73015.76     942.29  133934.14
00:22:06.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9)  NSID 1 from core 0:   1767.75     75.96   72484.53     860.73  118469.33
00:22:06.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2)  NSID 1 from core 0:   1766.89     75.92   71735.81     988.91  118265.05
00:22:06.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7)  NSID 1 from core 0:   1772.73     76.17   72252.50     959.70  137488.15
00:22:06.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4)  NSID 1 from core 0:   1769.48     76.03   71644.74     942.76  119281.54
00:22:06.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6)  NSID 1 from core 0:   1785.49     76.72   71025.24    1104.36  121146.27
00:22:06.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5)  NSID 1 from core 0:   1806.05     77.60   70247.45     895.12  123933.55
00:22:06.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1777.92     76.40   71389.71     745.88  126493.87
00:22:06.745 ========================================================
00:22:06.745 Total                                                               :  17767.32    763.44   71728.42     745.88  137488.15
00:22:06.745 ========================================================
00:22:06.745
00:22:06.745 [2024-11-20 09:55:43.206525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11506b0 is same with the state(6) to be set
00:22:06.745 [2024-11-20 09:55:43.206622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1152720 is same with the state(6) to be set
00:22:06.745 [2024-11-20 09:55:43.206683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151920 is same with the state(6) to be set
00:22:06.745 [2024-11-20 09:55:43.206748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151c50 is same with the state(6) to be set
00:22:06.745 [2024-11-20 09:55:43.206807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1152ae0 is same with the state(6) to be set
00:22:06.745 [2024-11-20 09:55:43.206865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11515f0 is same with the state(6) to be set
00:22:06.745 [2024-11-20 09:55:43.206923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11509e0 is same with the state(6) to be set
00:22:06.745 [2024-11-20 09:55:43.206981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11512c0 is same with the state(6) to be set
00:22:06.745 [2024-11-20 09:55:43.207038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1150d10 is same with the state(6) to be set
00:22:06.745 [2024-11-20 09:55:43.207098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1152900 is same with the state(6) to be set
00:22:06.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:06.745 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:07.740 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3786141
00:22:07.740 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:22:07.740 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3786141
00:22:07.740 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:22:07.740 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:07.740 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:22:07.740 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:07.740 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3786141
00:22:07.740 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:22:07.740 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:07.740 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:07.740 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:07.740 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:22:07.740 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:07.740 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:08.025 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:08.025 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:08.025 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:08.025
09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:08.025 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:08.025 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:08.025 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:08.025 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:08.025 rmmod nvme_tcp 00:22:08.025 rmmod nvme_fabrics 00:22:08.025 rmmod nvme_keyring 00:22:08.025 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:08.025 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:08.025 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:08.026 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3785993 ']' 00:22:08.026 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3785993 00:22:08.026 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3785993 ']' 00:22:08.026 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3785993 00:22:08.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3785993) - No such process 00:22:08.026 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3785993 is not found' 00:22:08.026 Process with pid 3785993 is not found 00:22:08.026 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:08.026 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:08.026 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:08.026 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:08.026 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:08.026 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:08.026 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:08.026 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:08.026 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:08.026 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.026 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.026 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.930 09:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:09.930 00:22:09.930 real 0m9.777s 00:22:09.930 user 0m24.294s 00:22:09.930 sys 0m5.500s 00:22:09.930 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:09.930 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:09.930 ************************************ 00:22:09.930 END TEST nvmf_shutdown_tc4 00:22:09.930 ************************************ 00:22:09.930 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:09.930 00:22:09.930 real 0m37.625s 00:22:09.930 user 1m42.743s 00:22:09.930 sys 0m12.102s 00:22:09.930 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:09.930 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:09.930 ************************************ 00:22:09.930 END TEST nvmf_shutdown 00:22:09.930 ************************************ 00:22:09.930 09:55:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:09.930 09:55:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:09.930 09:55:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:09.930 09:55:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:09.930 ************************************ 00:22:09.930 START TEST nvmf_nsid 00:22:09.930 ************************************ 00:22:09.930 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:10.189 * Looking for test storage... 
00:22:10.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:10.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.189 --rc genhtml_branch_coverage=1 00:22:10.189 --rc genhtml_function_coverage=1 00:22:10.189 --rc genhtml_legend=1 00:22:10.189 --rc geninfo_all_blocks=1 00:22:10.189 --rc geninfo_unexecuted_blocks=1 00:22:10.189 00:22:10.189 ' 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:10.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.189 --rc genhtml_branch_coverage=1 00:22:10.189 --rc genhtml_function_coverage=1 00:22:10.189 --rc genhtml_legend=1 00:22:10.189 --rc geninfo_all_blocks=1 00:22:10.189 --rc geninfo_unexecuted_blocks=1 00:22:10.189 00:22:10.189 ' 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:10.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.189 --rc genhtml_branch_coverage=1 00:22:10.189 --rc genhtml_function_coverage=1 00:22:10.189 --rc genhtml_legend=1 00:22:10.189 --rc geninfo_all_blocks=1 00:22:10.189 --rc geninfo_unexecuted_blocks=1 00:22:10.189 00:22:10.189 ' 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:10.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.189 --rc genhtml_branch_coverage=1 00:22:10.189 --rc genhtml_function_coverage=1 00:22:10.189 --rc genhtml_legend=1 00:22:10.189 --rc geninfo_all_blocks=1 00:22:10.189 --rc geninfo_unexecuted_blocks=1 00:22:10.189 00:22:10.189 ' 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.189 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:10.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:10.190 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:12.727 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:12.727 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:12.727 Found net devices under 0000:09:00.0: cvl_0_0 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:12.727 Found net devices under 0000:09:00.1: cvl_0_1 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:12.727 09:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.727 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:12.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:12.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:22:12.728 00:22:12.728 --- 10.0.0.2 ping statistics --- 00:22:12.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.728 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:12.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:12.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:22:12.728 00:22:12.728 --- 10.0.0.1 ping statistics --- 00:22:12.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.728 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3788883 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3788883 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3788883 ']' 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:12.728 [2024-11-20 09:55:49.335596] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:22:12.728 [2024-11-20 09:55:49.335716] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.728 [2024-11-20 09:55:49.406904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.728 [2024-11-20 09:55:49.460026] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.728 [2024-11-20 09:55:49.460084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.728 [2024-11-20 09:55:49.460107] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.728 [2024-11-20 09:55:49.460117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.728 [2024-11-20 09:55:49.460128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:12.728 [2024-11-20 09:55:49.460729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3788903 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=afec98c3-c912-4518-8fad-5de9be2ff3db 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=fe330c5e-ebb2-4cfc-8b67-7c04a6dc0a6a 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=55ef1fb9-f93e-4aea-b0b6-41ebbfb66a12 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.728 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:12.728 null0 00:22:12.728 null1 00:22:12.728 null2 00:22:12.987 [2024-11-20 09:55:49.639876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.987 [2024-11-20 09:55:49.653237] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:22:12.987 [2024-11-20 09:55:49.653335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3788903 ] 00:22:12.987 [2024-11-20 09:55:49.664078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.987 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.987 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3788903 /var/tmp/tgt2.sock 00:22:12.987 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3788903 ']' 00:22:12.987 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:12.987 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.987 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:12.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:22:12.987 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.987 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:12.987 [2024-11-20 09:55:49.721177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.987 [2024-11-20 09:55:49.780254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.245 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.245 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:13.245 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:13.811 [2024-11-20 09:55:50.488472] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.811 [2024-11-20 09:55:50.504683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:13.811 nvme0n1 nvme0n2 00:22:13.811 nvme1n1 00:22:13.811 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:13.811 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:13.811 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:14.377 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:14.377 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:14.377 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:22:14.377 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:14.377 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:14.377 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:14.377 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:14.377 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:14.377 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:14.377 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:14.377 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:14.377 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:14.377 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:15.312 09:55:52 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid afec98c3-c912-4518-8fad-5de9be2ff3db 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=afec98c3c91245188fad5de9be2ff3db 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AFEC98C3C91245188FAD5DE9BE2FF3DB 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ AFEC98C3C91245188FAD5DE9BE2FF3DB == \A\F\E\C\9\8\C\3\C\9\1\2\4\5\1\8\8\F\A\D\5\D\E\9\B\E\2\F\F\3\D\B ]] 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid fe330c5e-ebb2-4cfc-8b67-7c04a6dc0a6a 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:15.312 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:15.570 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fe330c5eebb24cfc8b677c04a6dc0a6a 00:22:15.570 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FE330C5EEBB24CFC8B677C04A6DC0A6A 00:22:15.570 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ FE330C5EEBB24CFC8B677C04A6DC0A6A == \F\E\3\3\0\C\5\E\E\B\B\2\4\C\F\C\8\B\6\7\7\C\0\4\A\6\D\C\0\A\6\A ]] 00:22:15.570 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:15.570 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:15.570 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:15.570 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:15.570 09:55:52 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:15.570 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:15.570 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:15.570 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 55ef1fb9-f93e-4aea-b0b6-41ebbfb66a12 00:22:15.570 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:15.570 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:15.570 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:15.570 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:22:15.570 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:15.570 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=55ef1fb9f93e4aeab0b641ebbfb66a12 00:22:15.570 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 55EF1FB9F93E4AEAB0B641EBBFB66A12 00:22:15.570 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 55EF1FB9F93E4AEAB0B641EBBFB66A12 == \5\5\E\F\1\F\B\9\F\9\3\E\4\A\E\A\B\0\B\6\4\1\E\B\B\F\B\6\6\A\1\2 ]] 00:22:15.570 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:15.828 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:15.828 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:15.828 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3788903 00:22:15.828 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3788903 ']' 00:22:15.828 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3788903 00:22:15.828 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:15.828 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:15.828 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3788903 00:22:15.828 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:15.828 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:15.828 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3788903' 00:22:15.828 killing process with pid 3788903 00:22:15.828 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3788903 00:22:15.828 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3788903 00:22:16.086 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:16.086 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:16.086 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:16.086 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:16.086 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:22:16.086 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:16.086 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:16.086 rmmod nvme_tcp 00:22:16.086 rmmod nvme_fabrics 00:22:16.086 rmmod nvme_keyring 00:22:16.344 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:16.344 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:16.344 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:16.344 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3788883 ']' 00:22:16.344 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3788883 00:22:16.344 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3788883 ']' 00:22:16.344 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3788883 00:22:16.344 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:16.344 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:16.344 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3788883 00:22:16.344 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:16.344 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:16.344 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3788883' 00:22:16.344 killing process with pid 3788883 00:22:16.344 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3788883 00:22:16.344 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3788883 00:22:16.603 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:16.603 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:16.603 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:16.603 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:16.603 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:16.603 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:16.603 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:16.603 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:16.603 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:16.603 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.603 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.603 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.511 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:18.511 00:22:18.511 real 0m8.507s 00:22:18.511 user 0m8.446s 
00:22:18.511 sys 0m2.723s 00:22:18.511 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:18.511 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:18.511 ************************************ 00:22:18.511 END TEST nvmf_nsid 00:22:18.511 ************************************ 00:22:18.511 09:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:18.511 00:22:18.511 real 11m47.652s 00:22:18.511 user 28m9.559s 00:22:18.511 sys 2m46.034s 00:22:18.511 09:55:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:18.511 09:55:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:18.511 ************************************ 00:22:18.511 END TEST nvmf_target_extra 00:22:18.511 ************************************ 00:22:18.511 09:55:55 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:18.511 09:55:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:18.511 09:55:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:18.511 09:55:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:18.511 ************************************ 00:22:18.511 START TEST nvmf_host 00:22:18.511 ************************************ 00:22:18.511 09:55:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:18.770 * Looking for test storage... 00:22:18.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:18.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.770 --rc genhtml_branch_coverage=1 00:22:18.770 --rc genhtml_function_coverage=1 00:22:18.770 --rc genhtml_legend=1 00:22:18.770 --rc geninfo_all_blocks=1 00:22:18.770 --rc geninfo_unexecuted_blocks=1 00:22:18.770 00:22:18.770 ' 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:18.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.770 --rc genhtml_branch_coverage=1 00:22:18.770 --rc genhtml_function_coverage=1 00:22:18.770 --rc genhtml_legend=1 00:22:18.770 --rc geninfo_all_blocks=1 00:22:18.770 --rc geninfo_unexecuted_blocks=1 00:22:18.770 00:22:18.770 ' 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:18.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.770 --rc genhtml_branch_coverage=1 00:22:18.770 --rc genhtml_function_coverage=1 00:22:18.770 --rc genhtml_legend=1 00:22:18.770 --rc geninfo_all_blocks=1 00:22:18.770 --rc geninfo_unexecuted_blocks=1 00:22:18.770 00:22:18.770 ' 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:18.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.770 --rc genhtml_branch_coverage=1 00:22:18.770 --rc genhtml_function_coverage=1 00:22:18.770 --rc genhtml_legend=1 00:22:18.770 --rc geninfo_all_blocks=1 00:22:18.770 --rc geninfo_unexecuted_blocks=1 00:22:18.770 00:22:18.770 ' 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:18.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.770 ************************************ 00:22:18.770 START TEST nvmf_multicontroller 00:22:18.770 ************************************ 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:18.770 * Looking for test storage... 
00:22:18.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:22:18.770 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:19.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.029 --rc genhtml_branch_coverage=1 00:22:19.029 --rc genhtml_function_coverage=1 00:22:19.029 --rc genhtml_legend=1 00:22:19.029 --rc geninfo_all_blocks=1 00:22:19.029 --rc geninfo_unexecuted_blocks=1 00:22:19.029 00:22:19.029 ' 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:19.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.029 --rc genhtml_branch_coverage=1 00:22:19.029 --rc genhtml_function_coverage=1 00:22:19.029 --rc genhtml_legend=1 00:22:19.029 --rc geninfo_all_blocks=1 00:22:19.029 --rc geninfo_unexecuted_blocks=1 00:22:19.029 00:22:19.029 ' 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:19.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.029 --rc genhtml_branch_coverage=1 00:22:19.029 --rc genhtml_function_coverage=1 00:22:19.029 --rc genhtml_legend=1 00:22:19.029 --rc geninfo_all_blocks=1 00:22:19.029 --rc geninfo_unexecuted_blocks=1 00:22:19.029 00:22:19.029 ' 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:19.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.029 --rc genhtml_branch_coverage=1 00:22:19.029 --rc genhtml_function_coverage=1 00:22:19.029 --rc genhtml_legend=1 00:22:19.029 --rc geninfo_all_blocks=1 00:22:19.029 --rc geninfo_unexecuted_blocks=1 00:22:19.029 00:22:19.029 ' 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:19.029 09:55:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.029 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:19.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:19.030 09:55:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:19.030 09:55:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:21.560 
09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:21.560 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:21.560 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:21.561 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.561 09:55:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:21.561 Found net devices under 0000:09:00.0: cvl_0_0 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:21.561 Found net devices under 0000:09:00.1: cvl_0_1 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
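The trace above is the NIC-discovery step (gather_supported_nvmf_pci_devs): supported PCI device IDs are collected into the e810/x722/mlx arrays and, because this rig runs NET_TYPE=phy on E810 NICs, only the e810 entries survive; each matched PCI address is then mapped to its kernel net device name. A condensed sketch of that matching step, reusing the names from the trace (pci_bus_cache is populated earlier by the harness, and the exact guards in common.sh are simplified here):

declare -A pci_bus_cache            # PCI address(es) per "vendor:device" key, filled before this point
intel=0x8086
e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})   # E810 device IDs (ice driver)
pci_devs=("${e810[@]}")             # NET_TYPE=phy with E810 NICs, so only the e810 list is kept
for pci in "${pci_devs[@]}"; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. /sys/bus/pci/devices/0000:09:00.0/net/cvl_0_0
  net_devs+=("${pci_net_devs[@]##*/}")                # strip the sysfs path, keep only the interface name
done
# In this run: 0000:09:00.0 -> cvl_0_0 and 0000:09:00.1 -> cvl_0_1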
00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:21.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:22:21.561 00:22:21.561 --- 10.0.0.2 ping statistics --- 00:22:21.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.561 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:21.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:22:21.561 00:22:21.561 --- 10.0.0.1 ping statistics --- 00:22:21.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.561 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:21.561 09:55:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:21.561 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:21.561 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:21.561 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:21.561 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.561 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3791457 00:22:21.561 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3791457 00:22:21.561 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3791457 ']' 00:22:21.561 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.561 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:21.561 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:21.561 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.561 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:21.561 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.561 [2024-11-20 09:55:58.078422] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:22:21.561 [2024-11-20 09:55:58.078518] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.561 [2024-11-20 09:55:58.151825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:21.561 [2024-11-20 09:55:58.206633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.561 [2024-11-20 09:55:58.206686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.561 [2024-11-20 09:55:58.206714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.561 [2024-11-20 09:55:58.206725] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.561 [2024-11-20 09:55:58.206734] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:21.561 [2024-11-20 09:55:58.208127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.561 [2024-11-20 09:55:58.208238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:21.561 [2024-11-20 09:55:58.208243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.561 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:21.561 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:21.561 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:21.561 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.562 [2024-11-20 09:55:58.353891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.562 Malloc0 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.562 [2024-11-20 09:55:58.417925] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.562 [2024-11-20 09:55:58.425793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.562 Malloc1 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.562 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.820 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.820 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:21.820 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.820 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.820 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.820 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3791489 00:22:21.820 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:21.820 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:21.820 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3791489 /var/tmp/bdevperf.sock 00:22:21.820 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3791489 ']' 00:22:21.820 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:21.820 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:21.820 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:21.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
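At this point the target side (running inside the cvl_0_0_ns_spdk namespace) is fully configured over /var/tmp/spdk.sock, and the initiator-side bdevperf has been launched with its own RPC socket. Pulled together from the rpc_cmd calls in the trace above, as a condensed sketch (rpc_cmd is the test harness's RPC helper; the cnode2/Malloc1 calls are elided because they mirror cnode1/Malloc0):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# cnode2 gets Malloc1 and the same two listeners; the initiator-side I/O generator then starts as:
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f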
00:22:21.820 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:21.820 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.079 NVMe0n1 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.079 1 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.079 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.079 request: 00:22:22.079 { 00:22:22.079 "name": "NVMe0", 00:22:22.079 "trtype": "tcp", 00:22:22.079 "traddr": "10.0.0.2", 00:22:22.079 "adrfam": "ipv4", 00:22:22.079 "trsvcid": "4420", 00:22:22.079 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:22.079 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:22.079 "hostaddr": "10.0.0.1", 00:22:22.079 "prchk_reftag": false, 00:22:22.079 "prchk_guard": false, 00:22:22.079 "hdgst": false, 00:22:22.079 "ddgst": false, 00:22:22.079 "allow_unrecognized_csi": false, 00:22:22.079 "method": "bdev_nvme_attach_controller", 00:22:22.079 "req_id": 1 00:22:22.079 } 00:22:22.079 Got JSON-RPC error response 00:22:22.079 response: 00:22:22.079 { 00:22:22.079 "code": -114, 00:22:22.080 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:22.080 } 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.080 request: 00:22:22.080 { 00:22:22.080 "name": "NVMe0", 00:22:22.080 "trtype": "tcp", 00:22:22.080 "traddr": "10.0.0.2", 00:22:22.080 "adrfam": "ipv4", 00:22:22.080 "trsvcid": "4420", 00:22:22.080 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:22.080 "hostaddr": "10.0.0.1", 00:22:22.080 "prchk_reftag": false, 00:22:22.080 "prchk_guard": false, 00:22:22.080 "hdgst": false, 00:22:22.080 "ddgst": false, 00:22:22.080 "allow_unrecognized_csi": false, 00:22:22.080 "method": "bdev_nvme_attach_controller", 00:22:22.080 "req_id": 1 00:22:22.080 } 00:22:22.080 Got JSON-RPC error response 00:22:22.080 response: 00:22:22.080 { 00:22:22.080 "code": -114, 00:22:22.080 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:22.080 } 00:22:22.080 09:55:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.080 request: 00:22:22.080 { 00:22:22.080 "name": "NVMe0", 00:22:22.080 "trtype": "tcp", 00:22:22.080 "traddr": "10.0.0.2", 00:22:22.080 "adrfam": "ipv4", 00:22:22.080 "trsvcid": "4420", 00:22:22.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.080 "hostaddr": "10.0.0.1", 00:22:22.080 "prchk_reftag": false, 00:22:22.080 "prchk_guard": false, 00:22:22.080 "hdgst": false, 00:22:22.080 "ddgst": false, 00:22:22.080 "multipath": "disable", 00:22:22.080 "allow_unrecognized_csi": false, 00:22:22.080 "method": "bdev_nvme_attach_controller", 00:22:22.080 "req_id": 1 00:22:22.080 } 00:22:22.080 Got JSON-RPC error response 00:22:22.080 response: 00:22:22.080 { 00:22:22.080 "code": -114, 00:22:22.080 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:22.080 } 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:22.080 09:55:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.080 request: 00:22:22.080 { 00:22:22.080 "name": "NVMe0", 00:22:22.080 "trtype": "tcp", 00:22:22.080 "traddr": "10.0.0.2", 00:22:22.080 "adrfam": "ipv4", 00:22:22.080 "trsvcid": "4420", 00:22:22.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.080 "hostaddr": "10.0.0.1", 00:22:22.080 "prchk_reftag": false, 00:22:22.080 "prchk_guard": false, 00:22:22.080 "hdgst": false, 00:22:22.080 "ddgst": false, 00:22:22.080 "multipath": "failover", 00:22:22.080 "allow_unrecognized_csi": false, 00:22:22.080 "method": "bdev_nvme_attach_controller", 00:22:22.080 "req_id": 1 00:22:22.080 } 00:22:22.080 Got JSON-RPC error response 00:22:22.080 response: 00:22:22.080 { 00:22:22.080 "code": -114, 00:22:22.080 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:22.080 } 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:22.080 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.081 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.081 NVMe0n1 00:22:22.081 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
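The four rejected attach attempts above all hit the same JSON-RPC error (-114): the bdevperf instance already has a controller named NVMe0, so reusing that name with a different host NQN, a different subsystem NQN, or an explicit -x disable/failover is refused, while adding a second path to the same subsystem on port 4421 succeeds. A minimal sketch of that sequence, assuming SPDK's scripts/rpc.py from the same tree and the bdevperf RPC socket at /var/tmp/bdevperf.sock (addresses and NQNs copied from the log above):

# Sketch only; socket path, addresses and NQNs mirror the log above.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

# First attach creates bdev NVMe0n1 for nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

# Reusing the controller name with a different subsystem NQN (or a different host NQN,
# or -x disable/failover) fails with JSON-RPC error -114, as recorded above.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 || echo 'expected: controller NVMe0 already exists'

# A second path to the same subsystem on port 4421 is accepted (multipath case).
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1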
00:22:22.081 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:22.081 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.081 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.081 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.081 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:22.081 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.081 09:55:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.338 00:22:22.338 09:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.338 09:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:22.338 09:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:22.338 09:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.338 09:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.338 09:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.338 09:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:22.338 09:55:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:23.710 { 00:22:23.710 "results": [ 00:22:23.710 { 00:22:23.710 "job": "NVMe0n1", 00:22:23.710 "core_mask": "0x1", 00:22:23.710 "workload": "write", 00:22:23.710 "status": "finished", 00:22:23.710 "queue_depth": 128, 00:22:23.710 "io_size": 4096, 00:22:23.710 "runtime": 1.006372, 00:22:23.710 "iops": 18244.744488121687, 00:22:23.710 "mibps": 71.26853315672534, 00:22:23.710 "io_failed": 0, 00:22:23.710 "io_timeout": 0, 00:22:23.710 "avg_latency_us": 7004.4884255880515, 00:22:23.710 "min_latency_us": 3422.4355555555558, 00:22:23.710 "max_latency_us": 18544.26074074074 00:22:23.710 } 00:22:23.710 ], 00:22:23.710 "core_count": 1 00:22:23.710 } 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3791489 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 3791489 ']' 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3791489 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3791489 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3791489' 00:22:23.710 killing process with pid 3791489 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3791489 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3791489 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:23.710 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:23.710 [2024-11-20 09:55:58.534907] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:22:23.710 [2024-11-20 09:55:58.534998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3791489 ] 00:22:23.710 [2024-11-20 09:55:58.604475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.710 [2024-11-20 09:55:58.663867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.710 [2024-11-20 09:55:59.168840] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name 60f7693d-34b3-4c62-aff5-a8bc1fd2d46c already exists 00:22:23.710 [2024-11-20 09:55:59.168875] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:60f7693d-34b3-4c62-aff5-a8bc1fd2d46c alias for bdev NVMe1n1 00:22:23.710 [2024-11-20 09:55:59.168905] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:23.710 Running I/O for 1 seconds... 00:22:23.710 18233.00 IOPS, 71.22 MiB/s 00:22:23.710 Latency(us) 00:22:23.710 [2024-11-20T08:56:00.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.710 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:23.710 NVMe0n1 : 1.01 18244.74 71.27 0.00 0.00 7004.49 3422.44 18544.26 00:22:23.710 [2024-11-20T08:56:00.624Z] =================================================================================================================== 00:22:23.710 [2024-11-20T08:56:00.624Z] Total : 18244.74 71.27 0.00 0.00 7004.49 3422.44 18544.26 00:22:23.710 Received shutdown signal, test time was about 1.000000 seconds 00:22:23.710 00:22:23.710 Latency(us) 00:22:23.710 [2024-11-20T08:56:00.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.710 [2024-11-20T08:56:00.624Z] =================================================================================================================== 00:22:23.710 [2024-11-20T08:56:00.624Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:23.710 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:23.710 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:23.710 rmmod nvme_tcp 00:22:23.710 rmmod nvme_fabrics 00:22:23.968 rmmod nvme_keyring 00:22:23.968 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:23.968 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:23.968 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:23.968 
09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3791457 ']' 00:22:23.968 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3791457 00:22:23.968 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3791457 ']' 00:22:23.968 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3791457 00:22:23.969 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:23.969 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.969 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3791457 00:22:23.969 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:23.969 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:23.969 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3791457' 00:22:23.969 killing process with pid 3791457 00:22:23.969 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3791457 00:22:23.969 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3791457 00:22:24.228 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:24.228 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:24.228 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:24.228 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:24.228 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:24.228 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:24.228 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:24.228 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:24.228 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:24.228 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.228 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.228 09:56:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.133 09:56:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:26.133 00:22:26.133 real 0m7.382s 00:22:26.133 user 0m11.072s 00:22:26.133 sys 0m2.397s 00:22:26.133 09:56:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.133 09:56:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.133 ************************************ 00:22:26.133 END TEST nvmf_multicontroller 00:22:26.133 ************************************ 00:22:26.133 09:56:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:22:26.133 09:56:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:26.133 09:56:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:26.133 09:56:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.133 ************************************ 00:22:26.133 START TEST nvmf_aer 00:22:26.133 ************************************ 00:22:26.133 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:26.392 * Looking for test storage... 00:22:26.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:26.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.392 --rc genhtml_branch_coverage=1 00:22:26.392 --rc genhtml_function_coverage=1 00:22:26.392 --rc genhtml_legend=1 00:22:26.392 --rc geninfo_all_blocks=1 00:22:26.392 --rc geninfo_unexecuted_blocks=1 00:22:26.392 00:22:26.392 ' 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:26.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.392 --rc genhtml_branch_coverage=1 00:22:26.392 --rc genhtml_function_coverage=1 00:22:26.392 --rc genhtml_legend=1 00:22:26.392 --rc geninfo_all_blocks=1 00:22:26.392 --rc geninfo_unexecuted_blocks=1 00:22:26.392 00:22:26.392 ' 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:26.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.392 --rc genhtml_branch_coverage=1 00:22:26.392 --rc genhtml_function_coverage=1 00:22:26.392 --rc genhtml_legend=1 00:22:26.392 --rc geninfo_all_blocks=1 00:22:26.392 --rc geninfo_unexecuted_blocks=1 00:22:26.392 00:22:26.392 ' 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:26.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.392 --rc genhtml_branch_coverage=1 00:22:26.392 --rc genhtml_function_coverage=1 00:22:26.392 --rc genhtml_legend=1 00:22:26.392 --rc geninfo_all_blocks=1 00:22:26.392 --rc geninfo_unexecuted_blocks=1 00:22:26.392 00:22:26.392 ' 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.392 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:26.393 09:56:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:28.925 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:28.925 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:28.925 Found net devices under 0000:09:00.0: cvl_0_0 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:28.925 09:56:05 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:28.925 Found net devices under 0000:09:00.1: cvl_0_1 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:28.925 
09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:28.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:22:28.925 00:22:28.925 --- 10.0.0.2 ping statistics --- 00:22:28.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.925 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:22:28.925 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:28.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:22:28.925 00:22:28.925 --- 10.0.0.1 ping statistics --- 00:22:28.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.926 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3793713 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3793713 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3793713 ']' 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:28.926 [2024-11-20 09:56:05.570590] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
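The nvmf target starting here runs inside the cvl_0_0_ns_spdk network namespace that the common setup configured just above; condensed into one place, and assuming the same interface names (cvl_0_0/cvl_0_1) and addresses recorded in the log, that topology amounts to:

# Condensed sketch of the namespace setup recorded above (names and addresses taken from the log).
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (default namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open TCP port 4420 for NVMe/TCP
ping -c 1 10.0.0.2                                                  # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check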
00:22:28.926 [2024-11-20 09:56:05.570682] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.926 [2024-11-20 09:56:05.643392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:28.926 [2024-11-20 09:56:05.701993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.926 [2024-11-20 09:56:05.702043] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.926 [2024-11-20 09:56:05.702057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.926 [2024-11-20 09:56:05.702068] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.926 [2024-11-20 09:56:05.702077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.926 [2024-11-20 09:56:05.703686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.926 [2024-11-20 09:56:05.703738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.926 [2024-11-20 09:56:05.703786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:28.926 [2024-11-20 09:56:05.703789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:28.926 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.184 [2024-11-20 09:56:05.845963] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.184 Malloc0 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.184 [2024-11-20 09:56:05.924689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.184 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.184 [ 00:22:29.184 { 00:22:29.184 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:29.184 "subtype": "Discovery", 00:22:29.184 "listen_addresses": [], 00:22:29.184 "allow_any_host": true, 00:22:29.184 "hosts": [] 00:22:29.184 }, 00:22:29.184 { 00:22:29.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.184 "subtype": "NVMe", 00:22:29.184 "listen_addresses": [ 00:22:29.184 { 00:22:29.184 "trtype": "TCP", 00:22:29.184 "adrfam": "IPv4", 00:22:29.184 "traddr": "10.0.0.2", 00:22:29.184 "trsvcid": "4420" 00:22:29.184 } 00:22:29.184 ], 00:22:29.184 "allow_any_host": true, 00:22:29.184 "hosts": [], 00:22:29.184 "serial_number": "SPDK00000000000001", 00:22:29.184 "model_number": "SPDK bdev Controller", 00:22:29.184 "max_namespaces": 2, 00:22:29.184 "min_cntlid": 1, 00:22:29.184 "max_cntlid": 65519, 00:22:29.184 "namespaces": [ 00:22:29.184 { 00:22:29.185 "nsid": 1, 00:22:29.185 "bdev_name": "Malloc0", 00:22:29.185 "name": "Malloc0", 00:22:29.185 "nguid": "23462CE91C584AF6BE08A60472384A8F", 00:22:29.185 "uuid": "23462ce9-1c58-4af6-be08-a60472384a8f" 00:22:29.185 } 00:22:29.185 ] 00:22:29.185 } 00:22:29.185 ] 00:22:29.185 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.185 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:29.185 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:29.185 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3793852 00:22:29.185 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:29.185 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:29.185 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:22:29.185 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:29.185 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:22:29.185 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:22:29.185 09:56:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:29.185 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:29.185 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:22:29.185 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:22:29.185 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.443 Malloc1 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.443 [ 00:22:29.443 { 00:22:29.443 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:29.443 "subtype": "Discovery", 00:22:29.443 "listen_addresses": [], 00:22:29.443 "allow_any_host": true, 00:22:29.443 "hosts": [] 00:22:29.443 }, 00:22:29.443 { 00:22:29.443 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.443 "subtype": "NVMe", 00:22:29.443 "listen_addresses": [ 00:22:29.443 { 00:22:29.443 "trtype": "TCP", 00:22:29.443 "adrfam": "IPv4", 00:22:29.443 "traddr": "10.0.0.2", 00:22:29.443 "trsvcid": "4420" 00:22:29.443 } 00:22:29.443 ], 00:22:29.443 "allow_any_host": true, 00:22:29.443 "hosts": [], 00:22:29.443 "serial_number": "SPDK00000000000001", 00:22:29.443 "model_number": "SPDK bdev Controller", 00:22:29.443 "max_namespaces": 2, 00:22:29.443 "min_cntlid": 1, 00:22:29.443 "max_cntlid": 65519, 00:22:29.443 "namespaces": [ 00:22:29.443 { 00:22:29.443 "nsid": 1, 00:22:29.443 "bdev_name": "Malloc0", 00:22:29.443 "name": "Malloc0", 00:22:29.443 "nguid": "23462CE91C584AF6BE08A60472384A8F", 00:22:29.443 "uuid": "23462ce9-1c58-4af6-be08-a60472384a8f" 00:22:29.443 }, 00:22:29.443 { 00:22:29.443 "nsid": 2, 00:22:29.443 "bdev_name": "Malloc1", 00:22:29.443 "name": "Malloc1", 00:22:29.443 "nguid": "431B1B30E0CE4E3BAC7125780B19D4E7", 00:22:29.443 "uuid": 
"431b1b30-e0ce-4e3b-ac71-25780b19d4e7" 00:22:29.443 } 00:22:29.443 ] 00:22:29.443 } 00:22:29.443 ] 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3793852 00:22:29.443 Asynchronous Event Request test 00:22:29.443 Attaching to 10.0.0.2 00:22:29.443 Attached to 10.0.0.2 00:22:29.443 Registering asynchronous event callbacks... 00:22:29.443 Starting namespace attribute notice tests for all controllers... 00:22:29.443 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:29.443 aer_cb - Changed Namespace 00:22:29.443 Cleaning up... 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.443 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.444 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:29.444 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.444 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.444 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.444 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:29.444 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:29.444 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:29.444 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:29.444 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:29.444 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:29.444 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:29.444 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:29.444 rmmod nvme_tcp 00:22:29.444 rmmod nvme_fabrics 00:22:29.444 rmmod nvme_keyring 00:22:29.444 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:29.444 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:29.444 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:29.444 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3793713 ']' 00:22:29.444 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3793713 00:22:29.444 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3793713 ']' 00:22:29.444 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3793713 00:22:29.702 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:22:29.702 09:56:06 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:29.702 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3793713 00:22:29.702 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:29.702 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:29.702 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3793713' 00:22:29.702 killing process with pid 3793713 00:22:29.702 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3793713 00:22:29.702 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3793713 00:22:29.702 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:29.702 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:29.963 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:29.963 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:29.963 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:29.963 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:29.963 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:29.963 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:29.963 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:29.963 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.963 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.963 09:56:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.868 09:56:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:31.868 00:22:31.868 real 0m5.632s 00:22:31.868 user 0m4.474s 00:22:31.868 sys 0m2.020s 00:22:31.868 09:56:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:31.868 09:56:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.868 ************************************ 00:22:31.868 END TEST nvmf_aer 00:22:31.868 ************************************ 00:22:31.868 09:56:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:31.868 09:56:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:31.868 09:56:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:31.868 09:56:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.868 ************************************ 00:22:31.868 START TEST nvmf_async_init 00:22:31.868 ************************************ 00:22:31.868 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:31.868 * Looking for test storage... 
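For reference, the nvmf_aer run that just finished above exercises a short RPC workflow that can be reproduced by hand. The sketch below is assembled only from the rpc_cmd calls visible in the log; it assumes an nvmf_tgt is already running with a TCP transport and that the nqn.2016-06.io.spdk:cnode1 subsystem and the Malloc0 bdev were created beforehand (those steps precede this excerpt), with scripts/rpc.py standing in for the harness's rpc_cmd wrapper.

    # Minimal sketch of the host/aer.sh flow (assumptions noted above).
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_get_subsystems                  # Malloc0 appears as nsid 1

    # Start the AER consumer; it touches the file once its callbacks are registered,
    # which is what the waitforfile loop in the log polls for.
    ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

    # Adding a second namespace raises the namespace-attribute-changed AER
    # ("aer_cb - Changed Namespace" in the log), after which the consumer exits.
    ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait

    # Cleanup, as in the log.
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_malloc_delete Malloc1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1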
00:22:31.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:31.868 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:31.868 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:22:31.868 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:32.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.127 --rc genhtml_branch_coverage=1 00:22:32.127 --rc genhtml_function_coverage=1 00:22:32.127 --rc genhtml_legend=1 00:22:32.127 --rc geninfo_all_blocks=1 00:22:32.127 --rc geninfo_unexecuted_blocks=1 00:22:32.127 00:22:32.127 ' 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:32.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.127 --rc genhtml_branch_coverage=1 00:22:32.127 --rc genhtml_function_coverage=1 00:22:32.127 --rc genhtml_legend=1 00:22:32.127 --rc geninfo_all_blocks=1 00:22:32.127 --rc geninfo_unexecuted_blocks=1 00:22:32.127 00:22:32.127 ' 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:32.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.127 --rc genhtml_branch_coverage=1 00:22:32.127 --rc genhtml_function_coverage=1 00:22:32.127 --rc genhtml_legend=1 00:22:32.127 --rc geninfo_all_blocks=1 00:22:32.127 --rc geninfo_unexecuted_blocks=1 00:22:32.127 00:22:32.127 ' 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:32.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.127 --rc genhtml_branch_coverage=1 00:22:32.127 --rc genhtml_function_coverage=1 00:22:32.127 --rc genhtml_legend=1 00:22:32.127 --rc geninfo_all_blocks=1 00:22:32.127 --rc geninfo_unexecuted_blocks=1 00:22:32.127 00:22:32.127 ' 00:22:32.127 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.128 09:56:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:32.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:32.128 09:56:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e71d2e166db54b28bc3d63b8d2be57b8 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:32.128 09:56:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.663 09:56:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:34.663 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:34.663 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.663 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:34.664 Found net devices under 0000:09:00.0: cvl_0_0 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:34.664 Found net devices under 0000:09:00.1: cvl_0_1 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.664 09:56:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:34.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:34.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:22:34.664 00:22:34.664 --- 10.0.0.2 ping statistics --- 00:22:34.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.664 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:34.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:34.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:22:34.664 00:22:34.664 --- 10.0.0.1 ping statistics --- 00:22:34.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.664 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3795800 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3795800 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3795800 ']' 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.664 [2024-11-20 09:56:11.224855] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:22:34.664 [2024-11-20 09:56:11.224941] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.664 [2024-11-20 09:56:11.295317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.664 [2024-11-20 09:56:11.351352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.664 [2024-11-20 09:56:11.351427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.664 [2024-11-20 09:56:11.351455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.664 [2024-11-20 09:56:11.351466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.664 [2024-11-20 09:56:11.351476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:34.664 [2024-11-20 09:56:11.352033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.664 [2024-11-20 09:56:11.484149] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.664 null0 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.664 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.665 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:34.665 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:34.665 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.665 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.665 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e71d2e166db54b28bc3d63b8d2be57b8 00:22:34.665 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.665 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.665 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.665 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:34.665 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.665 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.665 [2024-11-20 09:56:11.524463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.665 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.665 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:34.665 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.665 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.923 nvme0n1 00:22:34.923 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.923 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:34.923 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.923 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.923 [ 00:22:34.923 { 00:22:34.923 "name": "nvme0n1", 00:22:34.923 "aliases": [ 00:22:34.923 "e71d2e16-6db5-4b28-bc3d-63b8d2be57b8" 00:22:34.923 ], 00:22:34.923 "product_name": "NVMe disk", 00:22:34.923 "block_size": 512, 00:22:34.923 "num_blocks": 2097152, 00:22:34.923 "uuid": "e71d2e16-6db5-4b28-bc3d-63b8d2be57b8", 00:22:34.923 "numa_id": 0, 00:22:34.923 "assigned_rate_limits": { 00:22:34.923 "rw_ios_per_sec": 0, 00:22:34.923 "rw_mbytes_per_sec": 0, 00:22:34.923 "r_mbytes_per_sec": 0, 00:22:34.923 "w_mbytes_per_sec": 0 00:22:34.923 }, 00:22:34.923 "claimed": false, 00:22:34.923 "zoned": false, 00:22:34.923 "supported_io_types": { 00:22:34.923 "read": true, 00:22:34.923 "write": true, 00:22:34.923 "unmap": false, 00:22:34.923 "flush": true, 00:22:34.923 "reset": true, 00:22:34.923 "nvme_admin": true, 00:22:34.923 "nvme_io": true, 00:22:34.923 "nvme_io_md": false, 00:22:34.923 "write_zeroes": true, 00:22:34.923 "zcopy": false, 00:22:34.923 "get_zone_info": false, 00:22:34.923 "zone_management": false, 00:22:34.923 "zone_append": false, 00:22:34.923 "compare": true, 00:22:34.923 "compare_and_write": true, 00:22:34.923 "abort": true, 00:22:34.923 "seek_hole": false, 00:22:34.923 "seek_data": false, 00:22:34.923 "copy": true, 00:22:34.923 "nvme_iov_md": false 00:22:34.923 }, 00:22:34.923 
"memory_domains": [ 00:22:34.923 { 00:22:34.923 "dma_device_id": "system", 00:22:34.923 "dma_device_type": 1 00:22:34.923 } 00:22:34.923 ], 00:22:34.923 "driver_specific": { 00:22:34.923 "nvme": [ 00:22:34.923 { 00:22:34.923 "trid": { 00:22:34.923 "trtype": "TCP", 00:22:34.923 "adrfam": "IPv4", 00:22:34.923 "traddr": "10.0.0.2", 00:22:34.923 "trsvcid": "4420", 00:22:34.923 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:34.923 }, 00:22:34.923 "ctrlr_data": { 00:22:34.923 "cntlid": 1, 00:22:34.923 "vendor_id": "0x8086", 00:22:34.924 "model_number": "SPDK bdev Controller", 00:22:34.924 "serial_number": "00000000000000000000", 00:22:34.924 "firmware_revision": "25.01", 00:22:34.924 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:34.924 "oacs": { 00:22:34.924 "security": 0, 00:22:34.924 "format": 0, 00:22:34.924 "firmware": 0, 00:22:34.924 "ns_manage": 0 00:22:34.924 }, 00:22:34.924 "multi_ctrlr": true, 00:22:34.924 "ana_reporting": false 00:22:34.924 }, 00:22:34.924 "vs": { 00:22:34.924 "nvme_version": "1.3" 00:22:34.924 }, 00:22:34.924 "ns_data": { 00:22:34.924 "id": 1, 00:22:34.924 "can_share": true 00:22:34.924 } 00:22:34.924 } 00:22:34.924 ], 00:22:34.924 "mp_policy": "active_passive" 00:22:34.924 } 00:22:34.924 } 00:22:34.924 ] 00:22:34.924 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.924 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:34.924 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.924 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.924 [2024-11-20 09:56:11.773449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:34.924 [2024-11-20 09:56:11.773526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc08da0 (9): Bad file descriptor 00:22:35.182 [2024-11-20 09:56:11.905422] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:22:35.182 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.182 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:35.182 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.182 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.182 [ 00:22:35.182 { 00:22:35.182 "name": "nvme0n1", 00:22:35.182 "aliases": [ 00:22:35.182 "e71d2e16-6db5-4b28-bc3d-63b8d2be57b8" 00:22:35.182 ], 00:22:35.182 "product_name": "NVMe disk", 00:22:35.182 "block_size": 512, 00:22:35.182 "num_blocks": 2097152, 00:22:35.182 "uuid": "e71d2e16-6db5-4b28-bc3d-63b8d2be57b8", 00:22:35.182 "numa_id": 0, 00:22:35.182 "assigned_rate_limits": { 00:22:35.182 "rw_ios_per_sec": 0, 00:22:35.182 "rw_mbytes_per_sec": 0, 00:22:35.182 "r_mbytes_per_sec": 0, 00:22:35.182 "w_mbytes_per_sec": 0 00:22:35.182 }, 00:22:35.182 "claimed": false, 00:22:35.182 "zoned": false, 00:22:35.182 "supported_io_types": { 00:22:35.182 "read": true, 00:22:35.182 "write": true, 00:22:35.182 "unmap": false, 00:22:35.182 "flush": true, 00:22:35.182 "reset": true, 00:22:35.183 "nvme_admin": true, 00:22:35.183 "nvme_io": true, 00:22:35.183 "nvme_io_md": false, 00:22:35.183 "write_zeroes": true, 00:22:35.183 "zcopy": false, 00:22:35.183 "get_zone_info": false, 00:22:35.183 "zone_management": false, 00:22:35.183 "zone_append": false, 00:22:35.183 "compare": true, 00:22:35.183 "compare_and_write": true, 00:22:35.183 "abort": true, 00:22:35.183 "seek_hole": false, 00:22:35.183 "seek_data": false, 00:22:35.183 "copy": true, 00:22:35.183 "nvme_iov_md": false 00:22:35.183 }, 00:22:35.183 "memory_domains": [ 00:22:35.183 { 00:22:35.183 "dma_device_id": "system", 00:22:35.183 "dma_device_type": 1 00:22:35.183 } 00:22:35.183 ], 00:22:35.183 "driver_specific": { 00:22:35.183 "nvme": [ 00:22:35.183 { 00:22:35.183 "trid": { 00:22:35.183 "trtype": "TCP", 00:22:35.183 "adrfam": "IPv4", 00:22:35.183 "traddr": "10.0.0.2", 00:22:35.183 "trsvcid": "4420", 00:22:35.183 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:35.183 }, 00:22:35.183 "ctrlr_data": { 00:22:35.183 "cntlid": 2, 00:22:35.183 "vendor_id": "0x8086", 00:22:35.183 "model_number": "SPDK bdev Controller", 00:22:35.183 "serial_number": "00000000000000000000", 00:22:35.183 "firmware_revision": "25.01", 00:22:35.183 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:35.183 "oacs": { 00:22:35.183 "security": 0, 00:22:35.183 "format": 0, 00:22:35.183 "firmware": 0, 00:22:35.183 "ns_manage": 0 00:22:35.183 }, 00:22:35.183 "multi_ctrlr": true, 00:22:35.183 "ana_reporting": false 00:22:35.183 }, 00:22:35.183 "vs": { 00:22:35.183 "nvme_version": "1.3" 00:22:35.183 }, 00:22:35.183 "ns_data": { 00:22:35.183 "id": 1, 00:22:35.183 "can_share": true 00:22:35.183 } 00:22:35.183 } 00:22:35.183 ], 00:22:35.183 "mp_policy": "active_passive" 00:22:35.183 } 00:22:35.183 } 00:22:35.183 ] 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
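The first, non-TLS leg of nvmf_async_init above reduces to the RPC flow sketched here. Every command is taken from the rpc_cmd calls in the log; as in the test, a single nvmf_tgt instance acts as both target (subsystem cnode0 backed by a null bdev) and host (bdev_nvme controller nvme0), and scripts/rpc.py stands in for rpc_cmd.

    # Target side: export a null bdev as namespace 1 of cnode0 with a fixed NGUID.
    ./scripts/rpc.py bdev_null_create null0 1024 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e71d2e166db54b28bc3d63b8d2be57b8
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Host side: attach, inspect, reset, inspect again, detach.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1        # ctrlr_data.cntlid == 1
    ./scripts/rpc.py bdev_nvme_reset_controller nvme0 # disconnect, then reconnect
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1        # new admin connection: cntlid == 2
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The cntlid bump from 1 to 2 between the two bdev_get_bdevs outputs in the log is the observable effect of the reset: the controller association is torn down and re-established on reconnect.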
00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.puiBnRoXzG 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.puiBnRoXzG 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.puiBnRoXzG 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.183 [2024-11-20 09:56:11.966043] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:35.183 [2024-11-20 09:56:11.966163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.183 09:56:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.183 [2024-11-20 09:56:11.982090] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:35.183 nvme0n1 00:22:35.183 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.183 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:22:35.183 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.183 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.183 [ 00:22:35.183 { 00:22:35.183 "name": "nvme0n1", 00:22:35.183 "aliases": [ 00:22:35.183 "e71d2e16-6db5-4b28-bc3d-63b8d2be57b8" 00:22:35.183 ], 00:22:35.183 "product_name": "NVMe disk", 00:22:35.183 "block_size": 512, 00:22:35.183 "num_blocks": 2097152, 00:22:35.183 "uuid": "e71d2e16-6db5-4b28-bc3d-63b8d2be57b8", 00:22:35.183 "numa_id": 0, 00:22:35.183 "assigned_rate_limits": { 00:22:35.183 "rw_ios_per_sec": 0, 00:22:35.183 "rw_mbytes_per_sec": 0, 00:22:35.183 "r_mbytes_per_sec": 0, 00:22:35.183 "w_mbytes_per_sec": 0 00:22:35.183 }, 00:22:35.183 "claimed": false, 00:22:35.183 "zoned": false, 00:22:35.183 "supported_io_types": { 00:22:35.183 "read": true, 00:22:35.183 "write": true, 00:22:35.183 "unmap": false, 00:22:35.183 "flush": true, 00:22:35.183 "reset": true, 00:22:35.183 "nvme_admin": true, 00:22:35.183 "nvme_io": true, 00:22:35.183 "nvme_io_md": false, 00:22:35.183 "write_zeroes": true, 00:22:35.183 "zcopy": false, 00:22:35.183 "get_zone_info": false, 00:22:35.183 "zone_management": false, 00:22:35.183 "zone_append": false, 00:22:35.183 "compare": true, 00:22:35.183 "compare_and_write": true, 00:22:35.183 "abort": true, 00:22:35.183 "seek_hole": false, 00:22:35.183 "seek_data": false, 00:22:35.183 "copy": true, 00:22:35.183 "nvme_iov_md": false 00:22:35.183 }, 00:22:35.183 "memory_domains": [ 00:22:35.183 { 00:22:35.183 "dma_device_id": "system", 00:22:35.183 "dma_device_type": 1 00:22:35.183 } 00:22:35.183 ], 00:22:35.183 "driver_specific": { 00:22:35.183 "nvme": [ 00:22:35.183 { 00:22:35.183 "trid": { 00:22:35.183 "trtype": "TCP", 00:22:35.183 "adrfam": "IPv4", 00:22:35.183 "traddr": "10.0.0.2", 00:22:35.183 "trsvcid": "4421", 00:22:35.183 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:35.183 }, 00:22:35.183 "ctrlr_data": { 00:22:35.183 "cntlid": 3, 00:22:35.183 "vendor_id": "0x8086", 00:22:35.183 "model_number": "SPDK bdev Controller", 00:22:35.183 "serial_number": "00000000000000000000", 00:22:35.183 "firmware_revision": "25.01", 00:22:35.183 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:35.183 "oacs": { 00:22:35.183 "security": 0, 00:22:35.183 "format": 0, 00:22:35.183 "firmware": 0, 00:22:35.183 "ns_manage": 0 00:22:35.183 }, 00:22:35.183 "multi_ctrlr": true, 00:22:35.183 "ana_reporting": false 00:22:35.183 }, 00:22:35.183 "vs": { 00:22:35.183 "nvme_version": "1.3" 00:22:35.183 }, 00:22:35.183 "ns_data": { 00:22:35.183 "id": 1, 00:22:35.183 "can_share": true 00:22:35.183 } 00:22:35.183 } 00:22:35.183 ], 00:22:35.183 "mp_policy": "active_passive" 00:22:35.183 } 00:22:35.183 } 00:22:35.183 ] 00:22:35.183 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.183 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.183 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.183 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.183 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.183 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.puiBnRoXzG 00:22:35.183 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
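The TLS leg of the same test (listener on port 4421 with --secure-channel, ending at cntlid 3) can be sketched the same way. The PSK below is the sample interchange key used by the test, not a secret, and the key0/host1 names simply mirror the log.

    # Register the TLS PSK interchange key with the file-based keyring.
    KEY_PATH=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"
    ./scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"

    # Require an explicit host and add a TLS-only listener on 4421.
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0

    # Host side: attach over the secure channel using the same key reference.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1        # trsvcid 4421, cntlid 3
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
    rm -f "$KEY_PATH"

As both the listener and attach notices in the log point out, TLS support is flagged experimental in this SPDK build.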
00:22:35.183 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:35.183 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:35.183 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:35.183 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:35.183 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:35.183 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.183 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:35.183 rmmod nvme_tcp 00:22:35.442 rmmod nvme_fabrics 00:22:35.442 rmmod nvme_keyring 00:22:35.442 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.442 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:35.442 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:35.442 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3795800 ']' 00:22:35.442 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3795800 00:22:35.442 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3795800 ']' 00:22:35.442 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3795800 00:22:35.442 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:35.442 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.442 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3795800 00:22:35.442 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:35.442 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:35.442 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3795800' 00:22:35.442 killing process with pid 3795800 00:22:35.442 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3795800 00:22:35.442 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3795800 00:22:35.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:35.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:35.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:35.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:35.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:35.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:35.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:35.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:35.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:35.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:22:35.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.605 09:56:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:37.605 00:22:37.605 real 0m5.722s 00:22:37.605 user 0m2.230s 00:22:37.605 sys 0m1.906s 00:22:37.605 09:56:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.605 09:56:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:37.605 ************************************ 00:22:37.605 END TEST nvmf_async_init 00:22:37.605 ************************************ 00:22:37.605 09:56:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:37.605 09:56:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:37.605 09:56:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.605 09:56:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.605 ************************************ 00:22:37.605 START TEST dma 00:22:37.605 ************************************ 00:22:37.605 09:56:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:37.863 * Looking for test storage... 00:22:37.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:37.863 09:56:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:37.863 09:56:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:22:37.863 09:56:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:37.863 09:56:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:37.863 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:37.863 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.863 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.863 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.863 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.863 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.863 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.863 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:37.863 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:37.863 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:37.863 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:37.863 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:37.863 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:37.863 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:37.863 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:37.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.864 --rc genhtml_branch_coverage=1 00:22:37.864 --rc genhtml_function_coverage=1 00:22:37.864 --rc genhtml_legend=1 00:22:37.864 --rc geninfo_all_blocks=1 00:22:37.864 --rc geninfo_unexecuted_blocks=1 00:22:37.864 00:22:37.864 ' 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:37.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.864 --rc genhtml_branch_coverage=1 00:22:37.864 --rc genhtml_function_coverage=1 00:22:37.864 --rc genhtml_legend=1 00:22:37.864 --rc geninfo_all_blocks=1 00:22:37.864 --rc geninfo_unexecuted_blocks=1 00:22:37.864 00:22:37.864 ' 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:37.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.864 --rc genhtml_branch_coverage=1 00:22:37.864 --rc genhtml_function_coverage=1 00:22:37.864 --rc genhtml_legend=1 00:22:37.864 --rc geninfo_all_blocks=1 00:22:37.864 --rc geninfo_unexecuted_blocks=1 00:22:37.864 00:22:37.864 ' 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:37.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.864 --rc genhtml_branch_coverage=1 00:22:37.864 --rc genhtml_function_coverage=1 00:22:37.864 --rc genhtml_legend=1 00:22:37.864 --rc geninfo_all_blocks=1 00:22:37.864 --rc geninfo_unexecuted_blocks=1 00:22:37.864 00:22:37.864 ' 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.864 
09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:37.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:37.864 09:56:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:37.864 00:22:37.864 real 0m0.150s 00:22:37.864 user 0m0.112s 00:22:37.864 sys 0m0.047s 00:22:37.865 09:56:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.865 09:56:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:37.865 ************************************ 00:22:37.865 END TEST dma 00:22:37.865 ************************************ 00:22:37.865 09:56:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:37.865 09:56:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:37.865 09:56:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.865 09:56:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.865 ************************************ 00:22:37.865 START TEST nvmf_identify 00:22:37.865 
************************************ 00:22:37.865 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:37.865 * Looking for test storage... 00:22:37.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:37.865 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:37.865 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:22:37.865 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:38.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.124 --rc genhtml_branch_coverage=1 00:22:38.124 --rc genhtml_function_coverage=1 00:22:38.124 --rc genhtml_legend=1 00:22:38.124 --rc geninfo_all_blocks=1 00:22:38.124 --rc geninfo_unexecuted_blocks=1 00:22:38.124 00:22:38.124 ' 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:38.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.124 --rc genhtml_branch_coverage=1 00:22:38.124 --rc genhtml_function_coverage=1 00:22:38.124 --rc genhtml_legend=1 00:22:38.124 --rc geninfo_all_blocks=1 00:22:38.124 --rc geninfo_unexecuted_blocks=1 00:22:38.124 00:22:38.124 ' 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:38.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.124 --rc genhtml_branch_coverage=1 00:22:38.124 --rc genhtml_function_coverage=1 00:22:38.124 --rc genhtml_legend=1 00:22:38.124 --rc geninfo_all_blocks=1 00:22:38.124 --rc geninfo_unexecuted_blocks=1 00:22:38.124 00:22:38.124 ' 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:38.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.124 --rc genhtml_branch_coverage=1 00:22:38.124 --rc genhtml_function_coverage=1 00:22:38.124 --rc genhtml_legend=1 00:22:38.124 --rc geninfo_all_blocks=1 00:22:38.124 --rc geninfo_unexecuted_blocks=1 00:22:38.124 00:22:38.124 ' 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.124 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:38.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:38.125 09:56:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:40.027 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.027 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.027 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.027 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.027 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.027 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.027 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.027 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.027 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.027 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:40.027 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.027 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:40.027 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.027 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:40.027 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:40.028 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:40.028 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:40.028 Found net devices under 0000:09:00.0: cvl_0_0 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:40.028 Found net devices under 0000:09:00.1: cvl_0_1 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.028 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.287 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.287 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.287 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.287 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:22:40.287 00:22:40.287 --- 10.0.0.2 ping statistics --- 00:22:40.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.287 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:22:40.287 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:40.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:22:40.287 00:22:40.287 --- 10.0.0.1 ping statistics --- 00:22:40.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.287 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:22:40.287 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.287 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:40.287 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:40.287 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.287 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:40.287 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:40.287 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.287 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:40.287 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:40.287 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:40.287 09:56:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.287 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:40.287 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3797943 00:22:40.287 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:40.287 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3797943 00:22:40.287 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3797943 ']' 00:22:40.287 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:40.287 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.287 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.287 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.287 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.287 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:40.287 [2024-11-20 09:56:17.055496] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:22:40.287 [2024-11-20 09:56:17.055591] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.287 [2024-11-20 09:56:17.125157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.287 [2024-11-20 09:56:17.180922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.287 [2024-11-20 09:56:17.180977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.287 [2024-11-20 09:56:17.181004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.287 [2024-11-20 09:56:17.181016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.287 [2024-11-20 09:56:17.181025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.287 [2024-11-20 09:56:17.182542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.287 [2024-11-20 09:56:17.182614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.287 [2024-11-20 09:56:17.182676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.287 [2024-11-20 09:56:17.182679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:40.545 [2024-11-20 09:56:17.311499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:40.545 Malloc0 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:40.545 [2024-11-20 09:56:17.406100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.545 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:40.546 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.546 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:40.546 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.546 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:40.546 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.546 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:40.546 [ 00:22:40.546 { 00:22:40.546 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:40.546 "subtype": "Discovery", 00:22:40.546 "listen_addresses": [ 00:22:40.546 { 00:22:40.546 "trtype": "TCP", 00:22:40.546 "adrfam": "IPv4", 00:22:40.546 "traddr": "10.0.0.2", 00:22:40.546 "trsvcid": "4420" 00:22:40.546 } 00:22:40.546 ], 00:22:40.546 "allow_any_host": true, 00:22:40.546 "hosts": [] 00:22:40.546 }, 00:22:40.546 { 00:22:40.546 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.546 "subtype": "NVMe", 00:22:40.546 "listen_addresses": [ 00:22:40.546 { 00:22:40.546 "trtype": "TCP", 00:22:40.546 "adrfam": "IPv4", 00:22:40.546 "traddr": "10.0.0.2", 00:22:40.546 "trsvcid": "4420" 00:22:40.546 } 00:22:40.546 ], 00:22:40.546 "allow_any_host": true, 00:22:40.546 "hosts": [], 00:22:40.546 "serial_number": "SPDK00000000000001", 00:22:40.546 "model_number": "SPDK bdev Controller", 00:22:40.546 "max_namespaces": 32, 00:22:40.546 "min_cntlid": 1, 00:22:40.546 "max_cntlid": 65519, 00:22:40.546 "namespaces": [ 00:22:40.546 { 00:22:40.546 "nsid": 1, 00:22:40.546 "bdev_name": "Malloc0", 00:22:40.546 "name": "Malloc0", 00:22:40.546 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:40.546 "eui64": "ABCDEF0123456789", 00:22:40.546 "uuid": "dd4d0435-1442-407c-a9de-6bf6fa082b6a" 00:22:40.546 } 00:22:40.546 ] 00:22:40.546 } 00:22:40.546 ] 00:22:40.546 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.546 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:40.546 [2024-11-20 09:56:17.447414] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:22:40.546 [2024-11-20 09:56:17.447459] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3798086 ] 00:22:40.806 [2024-11-20 09:56:17.499758] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:40.806 [2024-11-20 09:56:17.499829] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:40.806 [2024-11-20 09:56:17.499840] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:40.806 [2024-11-20 09:56:17.499855] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:40.806 [2024-11-20 09:56:17.499872] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:40.806 [2024-11-20 09:56:17.503774] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:40.806 [2024-11-20 09:56:17.503842] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc69690 0 00:22:40.806 [2024-11-20 09:56:17.503987] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:40.806 [2024-11-20 09:56:17.504009] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:40.806 [2024-11-20 09:56:17.504019] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:40.806 [2024-11-20 09:56:17.504025] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:40.806 [2024-11-20 09:56:17.504080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.806 [2024-11-20 09:56:17.504095] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.806 [2024-11-20 09:56:17.504104] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc69690) 00:22:40.806 [2024-11-20 09:56:17.504124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:40.806 [2024-11-20 09:56:17.504150] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb100, cid 0, qid 0 00:22:40.806 [2024-11-20 09:56:17.511318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.806 [2024-11-20 09:56:17.511337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.806 [2024-11-20 09:56:17.511360] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.806 [2024-11-20 09:56:17.511368] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb100) on tqpair=0xc69690 00:22:40.806 [2024-11-20 09:56:17.511386] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:40.806 [2024-11-20 09:56:17.511399] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:40.806 [2024-11-20 09:56:17.511410] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:40.806 [2024-11-20 09:56:17.511436] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.806 [2024-11-20 09:56:17.511445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.806 [2024-11-20 09:56:17.511452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc69690) 00:22:40.806 [2024-11-20 09:56:17.511464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.806 [2024-11-20 09:56:17.511489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb100, cid 0, qid 0 00:22:40.806 [2024-11-20 09:56:17.511592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.806 [2024-11-20 09:56:17.511606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.806 [2024-11-20 09:56:17.511614] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.511626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb100) on tqpair=0xc69690 00:22:40.807 [2024-11-20 09:56:17.511637] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:40.807 [2024-11-20 09:56:17.511651] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:40.807 [2024-11-20 09:56:17.511664] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.511672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.511679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc69690) 00:22:40.807 [2024-11-20 09:56:17.511689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.807 [2024-11-20 09:56:17.511711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb100, cid 0, qid 0 00:22:40.807 [2024-11-20 09:56:17.511804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.807 [2024-11-20 09:56:17.511817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.807 [2024-11-20 09:56:17.511824] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.511831] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb100) on tqpair=0xc69690 00:22:40.807 [2024-11-20 09:56:17.511842] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:40.807 [2024-11-20 09:56:17.511856] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:40.807 [2024-11-20 09:56:17.511869] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.511876] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.511883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc69690) 00:22:40.807 [2024-11-20 09:56:17.511893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.807 [2024-11-20 09:56:17.511914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb100, cid 0, qid 0 
00:22:40.807 [2024-11-20 09:56:17.512003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.807 [2024-11-20 09:56:17.512016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.807 [2024-11-20 09:56:17.512023] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.512029] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb100) on tqpair=0xc69690 00:22:40.807 [2024-11-20 09:56:17.512039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:40.807 [2024-11-20 09:56:17.512056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.512065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.512071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc69690) 00:22:40.807 [2024-11-20 09:56:17.512082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.807 [2024-11-20 09:56:17.512103] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb100, cid 0, qid 0 00:22:40.807 [2024-11-20 09:56:17.512196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.807 [2024-11-20 09:56:17.512210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.807 [2024-11-20 09:56:17.512217] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.512223] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb100) on tqpair=0xc69690 00:22:40.807 [2024-11-20 09:56:17.512233] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:40.807 [2024-11-20 09:56:17.512246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:40.807 [2024-11-20 09:56:17.512261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:40.807 [2024-11-20 09:56:17.512373] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:40.807 [2024-11-20 09:56:17.512384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:40.807 [2024-11-20 09:56:17.512401] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.512408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.512415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc69690) 00:22:40.807 [2024-11-20 09:56:17.512425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.807 [2024-11-20 09:56:17.512447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb100, cid 0, qid 0 00:22:40.807 [2024-11-20 09:56:17.512549] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.807 [2024-11-20 09:56:17.512563] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.807 [2024-11-20 09:56:17.512570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.512576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb100) on tqpair=0xc69690 00:22:40.807 [2024-11-20 09:56:17.512585] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:40.807 [2024-11-20 09:56:17.512602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.512611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.512618] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc69690) 00:22:40.807 [2024-11-20 09:56:17.512628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.807 [2024-11-20 09:56:17.512649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb100, cid 0, qid 0 00:22:40.807 [2024-11-20 09:56:17.512734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.807 [2024-11-20 09:56:17.512747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.807 [2024-11-20 09:56:17.512754] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.512760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb100) on tqpair=0xc69690 00:22:40.807 [2024-11-20 09:56:17.512768] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:40.807 [2024-11-20 09:56:17.512777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:40.807 [2024-11-20 09:56:17.512791] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:40.807 [2024-11-20 09:56:17.512808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:40.807 [2024-11-20 09:56:17.512826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.512834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc69690) 00:22:40.807 [2024-11-20 09:56:17.512845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.807 [2024-11-20 09:56:17.512870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb100, cid 0, qid 0 00:22:40.807 [2024-11-20 09:56:17.513005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:40.807 [2024-11-20 09:56:17.513018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:40.807 [2024-11-20 09:56:17.513025] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.513032] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc69690): datao=0, datal=4096, cccid=0 00:22:40.807 [2024-11-20 09:56:17.513040] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xccb100) on tqpair(0xc69690): expected_datao=0, payload_size=4096 00:22:40.807 [2024-11-20 09:56:17.513048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.513060] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.513069] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.513082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.807 [2024-11-20 09:56:17.513092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.807 [2024-11-20 09:56:17.513099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.513106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb100) on tqpair=0xc69690 00:22:40.807 [2024-11-20 09:56:17.513121] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:40.807 [2024-11-20 09:56:17.513130] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:40.807 [2024-11-20 09:56:17.513137] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:40.807 [2024-11-20 09:56:17.513152] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:40.807 [2024-11-20 09:56:17.513162] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:40.807 [2024-11-20 09:56:17.513170] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:40.807 [2024-11-20 09:56:17.513189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:40.807 [2024-11-20 09:56:17.513204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.513212] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.513218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc69690) 00:22:40.807 [2024-11-20 09:56:17.513229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:40.807 [2024-11-20 09:56:17.513251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb100, cid 0, qid 0 00:22:40.807 [2024-11-20 09:56:17.513358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.807 [2024-11-20 09:56:17.513372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.807 [2024-11-20 09:56:17.513379] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.513386] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb100) on tqpair=0xc69690 00:22:40.807 [2024-11-20 09:56:17.513399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.513406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.807 [2024-11-20 09:56:17.513413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc69690) 00:22:40.807 [2024-11-20 
09:56:17.513423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.807 [2024-11-20 09:56:17.513433] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.513444] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.513451] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc69690) 00:22:40.808 [2024-11-20 09:56:17.513460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.808 [2024-11-20 09:56:17.513470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.513477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.513483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc69690) 00:22:40.808 [2024-11-20 09:56:17.513492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.808 [2024-11-20 09:56:17.513502] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.513509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.513515] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc69690) 00:22:40.808 [2024-11-20 09:56:17.513524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.808 [2024-11-20 09:56:17.513532] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:40.808 [2024-11-20 09:56:17.513548] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:40.808 [2024-11-20 09:56:17.513560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.513567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc69690) 00:22:40.808 [2024-11-20 09:56:17.513577] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.808 [2024-11-20 09:56:17.513600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb100, cid 0, qid 0 00:22:40.808 [2024-11-20 09:56:17.513611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb280, cid 1, qid 0 00:22:40.808 [2024-11-20 09:56:17.513619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb400, cid 2, qid 0 00:22:40.808 [2024-11-20 09:56:17.513627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb580, cid 3, qid 0 00:22:40.808 [2024-11-20 09:56:17.513634] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb700, cid 4, qid 0 00:22:40.808 [2024-11-20 09:56:17.513753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.808 [2024-11-20 09:56:17.513767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.808 [2024-11-20 09:56:17.513774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.808 
[2024-11-20 09:56:17.513781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb700) on tqpair=0xc69690 00:22:40.808 [2024-11-20 09:56:17.513796] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:40.808 [2024-11-20 09:56:17.513806] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:22:40.808 [2024-11-20 09:56:17.513824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.513834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc69690) 00:22:40.808 [2024-11-20 09:56:17.513845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.808 [2024-11-20 09:56:17.513866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb700, cid 4, qid 0 00:22:40.808 [2024-11-20 09:56:17.513974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:40.808 [2024-11-20 09:56:17.513989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:40.808 [2024-11-20 09:56:17.514000] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.514006] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc69690): datao=0, datal=4096, cccid=4 00:22:40.808 [2024-11-20 09:56:17.514014] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xccb700) on tqpair(0xc69690): expected_datao=0, payload_size=4096 00:22:40.808 [2024-11-20 09:56:17.514021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.514039] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.514048] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.558314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.808 [2024-11-20 09:56:17.558333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.808 [2024-11-20 09:56:17.558341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.558348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb700) on tqpair=0xc69690 00:22:40.808 [2024-11-20 09:56:17.558385] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:40.808 [2024-11-20 09:56:17.558429] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.558440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc69690) 00:22:40.808 [2024-11-20 09:56:17.558452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.808 [2024-11-20 09:56:17.558464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.558472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.558478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc69690) 00:22:40.808 [2024-11-20 09:56:17.558487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:40.808 [2024-11-20 09:56:17.558517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb700, cid 4, qid 0 00:22:40.808 [2024-11-20 09:56:17.558530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb880, cid 5, qid 0 00:22:40.808 [2024-11-20 09:56:17.558663] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:40.808 [2024-11-20 09:56:17.558678] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:40.808 [2024-11-20 09:56:17.558685] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.558692] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc69690): datao=0, datal=1024, cccid=4 00:22:40.808 [2024-11-20 09:56:17.558700] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xccb700) on tqpair(0xc69690): expected_datao=0, payload_size=1024 00:22:40.808 [2024-11-20 09:56:17.558707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.558717] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.558724] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.558733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.808 [2024-11-20 09:56:17.558742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.808 [2024-11-20 09:56:17.558749] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.558756] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb880) on tqpair=0xc69690 00:22:40.808 [2024-11-20 09:56:17.599381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.808 [2024-11-20 09:56:17.599400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.808 [2024-11-20 09:56:17.599407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.599414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb700) on tqpair=0xc69690 00:22:40.808 [2024-11-20 09:56:17.599433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.599448] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc69690) 00:22:40.808 [2024-11-20 09:56:17.599460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.808 [2024-11-20 09:56:17.599491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb700, cid 4, qid 0 00:22:40.808 [2024-11-20 09:56:17.599608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:40.808 [2024-11-20 09:56:17.599620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:40.808 [2024-11-20 09:56:17.599627] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.599634] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc69690): datao=0, datal=3072, cccid=4 00:22:40.808 [2024-11-20 09:56:17.599641] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xccb700) on tqpair(0xc69690): expected_datao=0, payload_size=3072 00:22:40.808 [2024-11-20 09:56:17.599649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
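Note: the entries above cover the post-initialization admin work on the discovery controller: four ASYNC EVENT REQUEST commands are armed (cid 0-3), the Keep Alive Timer feature is read (GET FEATURES, cdw10 feature id 0x0f, reported as "Sending keep alive every 5000000 us"), and the discovery log is then fetched through a series of GET LOG PAGE (02h) reads of log identifier 0x70 whose payloads arrive as C2H data PDUs (pdu type = 7). A minimal nvme-cli sketch of the same two admin commands, assuming a connected controller exposed as the hypothetical device node /dev/nvme0 (not something this job creates):

  # hypothetical equivalents of the admin commands traced above
  sudo nvme get-feature /dev/nvme0 --feature-id=0x0f          # Keep Alive Timer
  sudo nvme get-log /dev/nvme0 --log-id=0x70 --log-len=3072   # discovery log page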
00:22:40.808 [2024-11-20 09:56:17.599660] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.599668] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.599679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.808 [2024-11-20 09:56:17.599689] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.808 [2024-11-20 09:56:17.599696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.599703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb700) on tqpair=0xc69690 00:22:40.808 [2024-11-20 09:56:17.599718] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.599727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc69690) 00:22:40.808 [2024-11-20 09:56:17.599738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.808 [2024-11-20 09:56:17.599766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb700, cid 4, qid 0 00:22:40.808 [2024-11-20 09:56:17.599873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:40.808 [2024-11-20 09:56:17.599887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:40.808 [2024-11-20 09:56:17.599894] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.599901] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc69690): datao=0, datal=8, cccid=4 00:22:40.808 [2024-11-20 09:56:17.599908] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xccb700) on tqpair(0xc69690): expected_datao=0, payload_size=8 00:22:40.808 [2024-11-20 09:56:17.599916] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.599925] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.599933] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.645328] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.808 [2024-11-20 09:56:17.645346] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.808 [2024-11-20 09:56:17.645354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.808 [2024-11-20 09:56:17.645361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb700) on tqpair=0xc69690 00:22:40.808 ===================================================== 00:22:40.809 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:40.809 ===================================================== 00:22:40.809 Controller Capabilities/Features 00:22:40.809 ================================ 00:22:40.809 Vendor ID: 0000 00:22:40.809 Subsystem Vendor ID: 0000 00:22:40.809 Serial Number: .................... 00:22:40.809 Model Number: ........................................ 
00:22:40.809 Firmware Version: 25.01
00:22:40.809 Recommended Arb Burst: 0
00:22:40.809 IEEE OUI Identifier: 00 00 00
00:22:40.809 Multi-path I/O
00:22:40.809 May have multiple subsystem ports: No
00:22:40.809 May have multiple controllers: No
00:22:40.809 Associated with SR-IOV VF: No
00:22:40.809 Max Data Transfer Size: 131072
00:22:40.809 Max Number of Namespaces: 0
00:22:40.809 Max Number of I/O Queues: 1024
00:22:40.809 NVMe Specification Version (VS): 1.3
00:22:40.809 NVMe Specification Version (Identify): 1.3
00:22:40.809 Maximum Queue Entries: 128
00:22:40.809 Contiguous Queues Required: Yes
00:22:40.809 Arbitration Mechanisms Supported
00:22:40.809 Weighted Round Robin: Not Supported
00:22:40.809 Vendor Specific: Not Supported
00:22:40.809 Reset Timeout: 15000 ms
00:22:40.809 Doorbell Stride: 4 bytes
00:22:40.809 NVM Subsystem Reset: Not Supported
00:22:40.809 Command Sets Supported
00:22:40.809 NVM Command Set: Supported
00:22:40.809 Boot Partition: Not Supported
00:22:40.809 Memory Page Size Minimum: 4096 bytes
00:22:40.809 Memory Page Size Maximum: 4096 bytes
00:22:40.809 Persistent Memory Region: Not Supported
00:22:40.809 Optional Asynchronous Events Supported
00:22:40.809 Namespace Attribute Notices: Not Supported
00:22:40.809 Firmware Activation Notices: Not Supported
00:22:40.809 ANA Change Notices: Not Supported
00:22:40.809 PLE Aggregate Log Change Notices: Not Supported
00:22:40.809 LBA Status Info Alert Notices: Not Supported
00:22:40.809 EGE Aggregate Log Change Notices: Not Supported
00:22:40.809 Normal NVM Subsystem Shutdown event: Not Supported
00:22:40.809 Zone Descriptor Change Notices: Not Supported
00:22:40.809 Discovery Log Change Notices: Supported
00:22:40.809 Controller Attributes
00:22:40.809 128-bit Host Identifier: Not Supported
00:22:40.809 Non-Operational Permissive Mode: Not Supported
00:22:40.809 NVM Sets: Not Supported
00:22:40.809 Read Recovery Levels: Not Supported
00:22:40.809 Endurance Groups: Not Supported
00:22:40.809 Predictable Latency Mode: Not Supported
00:22:40.809 Traffic Based Keep ALive: Not Supported
00:22:40.809 Namespace Granularity: Not Supported
00:22:40.809 SQ Associations: Not Supported
00:22:40.809 UUID List: Not Supported
00:22:40.809 Multi-Domain Subsystem: Not Supported
00:22:40.809 Fixed Capacity Management: Not Supported
00:22:40.809 Variable Capacity Management: Not Supported
00:22:40.809 Delete Endurance Group: Not Supported
00:22:40.809 Delete NVM Set: Not Supported
00:22:40.809 Extended LBA Formats Supported: Not Supported
00:22:40.809 Flexible Data Placement Supported: Not Supported
00:22:40.809 
00:22:40.809 Controller Memory Buffer Support
00:22:40.809 ================================
00:22:40.809 Supported: No
00:22:40.809 
00:22:40.809 Persistent Memory Region Support
00:22:40.809 ================================
00:22:40.809 Supported: No
00:22:40.809 
00:22:40.809 Admin Command Set Attributes
00:22:40.809 ============================
00:22:40.809 Security Send/Receive: Not Supported
00:22:40.809 Format NVM: Not Supported
00:22:40.809 Firmware Activate/Download: Not Supported
00:22:40.809 Namespace Management: Not Supported
00:22:40.809 Device Self-Test: Not Supported
00:22:40.809 Directives: Not Supported
00:22:40.809 NVMe-MI: Not Supported
00:22:40.809 Virtualization Management: Not Supported
00:22:40.809 Doorbell Buffer Config: Not Supported
00:22:40.809 Get LBA Status Capability: Not Supported
00:22:40.809 Command & Feature Lockdown Capability: Not Supported
00:22:40.809 Abort Command Limit: 1
00:22:40.809 Async Event Request Limit: 4
00:22:40.809 Number of Firmware Slots: N/A
00:22:40.809 Firmware Slot 1 Read-Only: N/A
00:22:40.809 Firmware Activation Without Reset: N/A
00:22:40.809 Multiple Update Detection Support: N/A
00:22:40.809 Firmware Update Granularity: No Information Provided
00:22:40.809 Per-Namespace SMART Log: No
00:22:40.809 Asymmetric Namespace Access Log Page: Not Supported
00:22:40.809 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:40.809 Command Effects Log Page: Not Supported
00:22:40.809 Get Log Page Extended Data: Supported
00:22:40.809 Telemetry Log Pages: Not Supported
00:22:40.809 Persistent Event Log Pages: Not Supported
00:22:40.809 Supported Log Pages Log Page: May Support
00:22:40.809 Commands Supported & Effects Log Page: Not Supported
00:22:40.809 Feature Identifiers & Effects Log Page:May Support
00:22:40.809 NVMe-MI Commands & Effects Log Page: May Support
00:22:40.809 Data Area 4 for Telemetry Log: Not Supported
00:22:40.809 Error Log Page Entries Supported: 128
00:22:40.809 Keep Alive: Not Supported
00:22:40.809 
00:22:40.809 NVM Command Set Attributes
00:22:40.809 ==========================
00:22:40.809 Submission Queue Entry Size
00:22:40.809 Max: 1
00:22:40.809 Min: 1
00:22:40.809 Completion Queue Entry Size
00:22:40.809 Max: 1
00:22:40.809 Min: 1
00:22:40.809 Number of Namespaces: 0
00:22:40.809 Compare Command: Not Supported
00:22:40.809 Write Uncorrectable Command: Not Supported
00:22:40.809 Dataset Management Command: Not Supported
00:22:40.809 Write Zeroes Command: Not Supported
00:22:40.809 Set Features Save Field: Not Supported
00:22:40.809 Reservations: Not Supported
00:22:40.809 Timestamp: Not Supported
00:22:40.809 Copy: Not Supported
00:22:40.809 Volatile Write Cache: Not Present
00:22:40.809 Atomic Write Unit (Normal): 1
00:22:40.809 Atomic Write Unit (PFail): 1
00:22:40.809 Atomic Compare & Write Unit: 1
00:22:40.809 Fused Compare & Write: Supported
00:22:40.809 Scatter-Gather List
00:22:40.809 SGL Command Set: Supported
00:22:40.809 SGL Keyed: Supported
00:22:40.809 SGL Bit Bucket Descriptor: Not Supported
00:22:40.809 SGL Metadata Pointer: Not Supported
00:22:40.809 Oversized SGL: Not Supported
00:22:40.809 SGL Metadata Address: Not Supported
00:22:40.809 SGL Offset: Supported
00:22:40.809 Transport SGL Data Block: Not Supported
00:22:40.809 Replay Protected Memory Block: Not Supported
00:22:40.809 
00:22:40.809 Firmware Slot Information
00:22:40.809 =========================
00:22:40.809 Active slot: 0
00:22:40.809 
00:22:40.809 
00:22:40.809 Error Log
00:22:40.809 =========
00:22:40.809 
00:22:40.809 Active Namespaces
00:22:40.809 =================
00:22:40.809 Discovery Log Page
00:22:40.809 ==================
00:22:40.809 Generation Counter: 2
00:22:40.809 Number of Records: 2
00:22:40.809 Record Format: 0
00:22:40.809 
00:22:40.809 Discovery Log Entry 0
00:22:40.809 ----------------------
00:22:40.809 Transport Type: 3 (TCP)
00:22:40.809 Address Family: 1 (IPv4)
00:22:40.809 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:40.809 Entry Flags:
00:22:40.809 Duplicate Returned Information: 1
00:22:40.809 Explicit Persistent Connection Support for Discovery: 1
00:22:40.809 Transport Requirements:
00:22:40.809 Secure Channel: Not Required
00:22:40.809 Port ID: 0 (0x0000)
00:22:40.809 Controller ID: 65535 (0xffff)
00:22:40.809 Admin Max SQ Size: 128
00:22:40.809 Transport Service Identifier: 4420
00:22:40.809 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:40.809 Transport Address: 10.0.0.2
00:22:40.809 
Discovery Log Entry 1 00:22:40.809 ---------------------- 00:22:40.809 Transport Type: 3 (TCP) 00:22:40.809 Address Family: 1 (IPv4) 00:22:40.809 Subsystem Type: 2 (NVM Subsystem) 00:22:40.809 Entry Flags: 00:22:40.809 Duplicate Returned Information: 0 00:22:40.809 Explicit Persistent Connection Support for Discovery: 0 00:22:40.809 Transport Requirements: 00:22:40.809 Secure Channel: Not Required 00:22:40.809 Port ID: 0 (0x0000) 00:22:40.809 Controller ID: 65535 (0xffff) 00:22:40.809 Admin Max SQ Size: 128 00:22:40.809 Transport Service Identifier: 4420 00:22:40.809 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:40.809 Transport Address: 10.0.0.2 [2024-11-20 09:56:17.645480] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:40.809 [2024-11-20 09:56:17.645504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb100) on tqpair=0xc69690 00:22:40.810 [2024-11-20 09:56:17.645518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.810 [2024-11-20 09:56:17.645527] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb280) on tqpair=0xc69690 00:22:40.810 [2024-11-20 09:56:17.645539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.810 [2024-11-20 09:56:17.645547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb400) on tqpair=0xc69690 00:22:40.810 [2024-11-20 09:56:17.645555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.810 [2024-11-20 09:56:17.645563] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb580) on tqpair=0xc69690 00:22:40.810 [2024-11-20 09:56:17.645571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.810 [2024-11-20 09:56:17.645589] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.645598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.645605] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc69690) 00:22:40.810 [2024-11-20 09:56:17.645616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.810 [2024-11-20 09:56:17.645657] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb580, cid 3, qid 0 00:22:40.810 [2024-11-20 09:56:17.645758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.810 [2024-11-20 09:56:17.645771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.810 [2024-11-20 09:56:17.645778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.645785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb580) on tqpair=0xc69690 00:22:40.810 [2024-11-20 09:56:17.645798] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.645805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.645812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc69690) 00:22:40.810 [2024-11-20 09:56:17.645822] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.810 [2024-11-20 09:56:17.645849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb580, cid 3, qid 0 00:22:40.810 [2024-11-20 09:56:17.645961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.810 [2024-11-20 09:56:17.645975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.810 [2024-11-20 09:56:17.645983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.645990] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb580) on tqpair=0xc69690 00:22:40.810 [2024-11-20 09:56:17.645999] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:40.810 [2024-11-20 09:56:17.646007] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:40.810 [2024-11-20 09:56:17.646023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.646033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.646039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc69690) 00:22:40.810 [2024-11-20 09:56:17.646049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.810 [2024-11-20 09:56:17.646070] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb580, cid 3, qid 0 00:22:40.810 [2024-11-20 09:56:17.646156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.810 [2024-11-20 09:56:17.646169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.810 [2024-11-20 09:56:17.646176] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.646182] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb580) on tqpair=0xc69690 00:22:40.810 [2024-11-20 09:56:17.646200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.646213] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.646220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc69690) 00:22:40.810 [2024-11-20 09:56:17.646231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.810 [2024-11-20 09:56:17.646252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb580, cid 3, qid 0 00:22:40.810 [2024-11-20 09:56:17.646343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.810 [2024-11-20 09:56:17.646357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.810 [2024-11-20 09:56:17.646365] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.646371] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb580) on tqpair=0xc69690 00:22:40.810 [2024-11-20 09:56:17.646388] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.646397] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.646403] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc69690) 00:22:40.810 [2024-11-20 09:56:17.646414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.810 [2024-11-20 09:56:17.646435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb580, cid 3, qid 0 00:22:40.810 [2024-11-20 09:56:17.646524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.810 [2024-11-20 09:56:17.646537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.810 [2024-11-20 09:56:17.646544] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.646551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb580) on tqpair=0xc69690 00:22:40.810 [2024-11-20 09:56:17.646567] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.646576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.646583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc69690) 00:22:40.810 [2024-11-20 09:56:17.646593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.810 [2024-11-20 09:56:17.646614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb580, cid 3, qid 0 00:22:40.810 [2024-11-20 09:56:17.646708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.810 [2024-11-20 09:56:17.646722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.810 [2024-11-20 09:56:17.646729] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.646736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb580) on tqpair=0xc69690 00:22:40.810 [2024-11-20 09:56:17.646752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.646762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.646768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc69690) 00:22:40.810 [2024-11-20 09:56:17.646779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.810 [2024-11-20 09:56:17.646799] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb580, cid 3, qid 0 00:22:40.810 [2024-11-20 09:56:17.646883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.810 [2024-11-20 09:56:17.646896] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.810 [2024-11-20 09:56:17.646903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.646909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb580) on tqpair=0xc69690 00:22:40.810 [2024-11-20 09:56:17.646925] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.646935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.646942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc69690) 00:22:40.810 [2024-11-20 09:56:17.646956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.810 [2024-11-20 09:56:17.646978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb580, cid 3, qid 0 00:22:40.810 [2024-11-20 09:56:17.647068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.810 [2024-11-20 09:56:17.647080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.810 [2024-11-20 09:56:17.647087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.647094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb580) on tqpair=0xc69690 00:22:40.810 [2024-11-20 09:56:17.647110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.647119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.647126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc69690) 00:22:40.810 [2024-11-20 09:56:17.647136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.810 [2024-11-20 09:56:17.647157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb580, cid 3, qid 0 00:22:40.810 [2024-11-20 09:56:17.647251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.810 [2024-11-20 09:56:17.647265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.810 [2024-11-20 09:56:17.647272] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.647279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb580) on tqpair=0xc69690 00:22:40.810 [2024-11-20 09:56:17.647295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.647312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.647319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc69690) 00:22:40.810 [2024-11-20 09:56:17.647330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.810 [2024-11-20 09:56:17.647351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb580, cid 3, qid 0 00:22:40.810 [2024-11-20 09:56:17.647443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.810 [2024-11-20 09:56:17.647455] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.810 [2024-11-20 09:56:17.647463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.647469] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb580) on tqpair=0xc69690 00:22:40.810 [2024-11-20 09:56:17.647485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.647495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.810 [2024-11-20 09:56:17.647501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc69690) 00:22:40.810 [2024-11-20 09:56:17.647511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.810 [2024-11-20 09:56:17.647532] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb580, cid 3, qid 0 00:22:40.810 [2024-11-20 09:56:17.647625] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.810 [2024-11-20 09:56:17.647639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.811 [2024-11-20 09:56:17.647646] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.811 [2024-11-20 09:56:17.647653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb580) on tqpair=0xc69690 00:22:40.811 [2024-11-20 09:56:17.647669] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.811 [2024-11-20 09:56:17.647678] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.811 [2024-11-20 09:56:17.647685] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc69690) 00:22:40.811 [2024-11-20 09:56:17.647695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.811 [2024-11-20 09:56:17.647722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb580, cid 3, qid 0 00:22:40.811 [2024-11-20 09:56:17.647817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.811 [2024-11-20 09:56:17.647830] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.811 [2024-11-20 09:56:17.647837] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.811 [2024-11-20 09:56:17.647844] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb580) on tqpair=0xc69690 00:22:40.811 [2024-11-20 09:56:17.647860] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.811 [2024-11-20 09:56:17.647869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.811 [2024-11-20 09:56:17.647876] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc69690) 00:22:40.811 [2024-11-20 09:56:17.647886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.811 [2024-11-20 09:56:17.647907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb580, cid 3, qid 0 00:22:40.811 [2024-11-20 09:56:17.647997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.811 [2024-11-20 09:56:17.648009] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.811 [2024-11-20 09:56:17.648016] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.811 [2024-11-20 09:56:17.648023] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb580) on tqpair=0xc69690 00:22:40.811 [2024-11-20 09:56:17.648038] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.811 [2024-11-20 09:56:17.648048] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.811 [2024-11-20 09:56:17.648055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc69690) 00:22:40.811 [2024-11-20 09:56:17.648065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.811 [2024-11-20 09:56:17.648086] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb580, cid 3, qid 0 00:22:40.811 [2024-11-20 09:56:17.651314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.811 [2024-11-20 09:56:17.651331] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.811 [2024-11-20 09:56:17.651339] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.811 [2024-11-20 09:56:17.651345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb580) on tqpair=0xc69690 00:22:40.811 [2024-11-20 09:56:17.651364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:40.811 [2024-11-20 09:56:17.651374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:40.811 [2024-11-20 09:56:17.651381] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc69690) 00:22:40.811 [2024-11-20 09:56:17.651392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.811 [2024-11-20 09:56:17.651415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb580, cid 3, qid 0 00:22:40.811 [2024-11-20 09:56:17.651506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:40.811 [2024-11-20 09:56:17.651518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:40.811 [2024-11-20 09:56:17.651525] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:40.811 [2024-11-20 09:56:17.651532] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb580) on tqpair=0xc69690 00:22:40.811 [2024-11-20 09:56:17.651545] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:22:40.811 00:22:40.811 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:40.811 [2024-11-20 09:56:17.687684] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
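Note: the "# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify ..." line above is the next step of host/identify.sh: the same identify tool is now pointed at the NVM subsystem advertised in Discovery Log Entry 1 (subnqn nqn.2016-06.io.spdk:cnode1) instead of the discovery subsystem. The -r argument is an SPDK transport ID string (trtype/adrfam/traddr/trsvcid/subnqn), and -L all enables the debug log flags, which is why the trace that follows is so verbose. For reference, a quieter invocation against the same target would simply drop the log flags (a sketch, not something this job runs):

  # hypothetical re-run without debug logging; prints only the identify report
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'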
00:22:40.811 [2024-11-20 09:56:17.687728] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3798088 ] 00:22:41.074 [2024-11-20 09:56:17.736021] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:41.074 [2024-11-20 09:56:17.736076] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:41.074 [2024-11-20 09:56:17.736087] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:41.074 [2024-11-20 09:56:17.736105] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:41.074 [2024-11-20 09:56:17.736118] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:41.074 [2024-11-20 09:56:17.743581] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:41.074 [2024-11-20 09:56:17.743621] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x14aa690 0 00:22:41.074 [2024-11-20 09:56:17.743770] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:41.074 [2024-11-20 09:56:17.743786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:41.074 [2024-11-20 09:56:17.743794] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:41.074 [2024-11-20 09:56:17.743801] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:41.074 [2024-11-20 09:56:17.743834] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.743846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.743853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14aa690) 00:22:41.074 [2024-11-20 09:56:17.743867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:41.074 [2024-11-20 09:56:17.743893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c100, cid 0, qid 0 00:22:41.074 [2024-11-20 09:56:17.750318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.074 [2024-11-20 09:56:17.750346] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.074 [2024-11-20 09:56:17.750355] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.750362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c100) on tqpair=0x14aa690 00:22:41.074 [2024-11-20 09:56:17.750381] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:41.074 [2024-11-20 09:56:17.750393] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:41.074 [2024-11-20 09:56:17.750403] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:41.074 [2024-11-20 09:56:17.750422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.750431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.750438] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14aa690) 00:22:41.074 [2024-11-20 09:56:17.750449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.074 [2024-11-20 09:56:17.750474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c100, cid 0, qid 0 00:22:41.074 [2024-11-20 09:56:17.750562] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.074 [2024-11-20 09:56:17.750577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.074 [2024-11-20 09:56:17.750589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.750597] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c100) on tqpair=0x14aa690 00:22:41.074 [2024-11-20 09:56:17.750605] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:41.074 [2024-11-20 09:56:17.750619] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:41.074 [2024-11-20 09:56:17.750631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.750639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.750646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14aa690) 00:22:41.074 [2024-11-20 09:56:17.750656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.074 [2024-11-20 09:56:17.750678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c100, cid 0, qid 0 00:22:41.074 [2024-11-20 09:56:17.750754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.074 [2024-11-20 09:56:17.750766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.074 [2024-11-20 09:56:17.750773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.750780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c100) on tqpair=0x14aa690 00:22:41.074 [2024-11-20 09:56:17.750789] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:41.074 [2024-11-20 09:56:17.750802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:41.074 [2024-11-20 09:56:17.750815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.750822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.750829] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14aa690) 00:22:41.074 [2024-11-20 09:56:17.750839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.074 [2024-11-20 09:56:17.750860] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c100, cid 0, qid 0 00:22:41.074 [2024-11-20 09:56:17.750939] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.074 [2024-11-20 09:56:17.750952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.074 [2024-11-20 
09:56:17.750960] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.750966] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c100) on tqpair=0x14aa690 00:22:41.074 [2024-11-20 09:56:17.750975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:41.074 [2024-11-20 09:56:17.750992] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.751001] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.751008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14aa690) 00:22:41.074 [2024-11-20 09:56:17.751018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.074 [2024-11-20 09:56:17.751040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c100, cid 0, qid 0 00:22:41.074 [2024-11-20 09:56:17.751137] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.074 [2024-11-20 09:56:17.751150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.074 [2024-11-20 09:56:17.751157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.751164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c100) on tqpair=0x14aa690 00:22:41.074 [2024-11-20 09:56:17.751171] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:41.074 [2024-11-20 09:56:17.751184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:41.074 [2024-11-20 09:56:17.751198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:41.074 [2024-11-20 09:56:17.751310] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:41.074 [2024-11-20 09:56:17.751321] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:41.074 [2024-11-20 09:56:17.751333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.751341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.751347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14aa690) 00:22:41.074 [2024-11-20 09:56:17.751358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.074 [2024-11-20 09:56:17.751380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c100, cid 0, qid 0 00:22:41.074 [2024-11-20 09:56:17.751465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.074 [2024-11-20 09:56:17.751479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.074 [2024-11-20 09:56:17.751486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.751493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c100) on tqpair=0x14aa690 00:22:41.074 
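Note: the entries above repeat, for nqn.2016-06.io.spdk:cnode1, the same enable handshake seen earlier on the discovery controller: CC.EN is written to 1 through a FABRIC PROPERTY SET, and the trace continues below with the CSTS.RDY poll and an IDENTIFY CONTROLLER command (opcode 06h, cdw10 CNS 01h). As a reference sketch only, independent of what this script does, the same subsystem could be attached through the kernel initiator and inspected with nvme-cli (the device node name is hypothetical):

  # hypothetical kernel-initiator path to the same subsystem
  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  sudo nvme id-ctrl /dev/nvme0
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1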
[2024-11-20 09:56:17.751502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:41.074 [2024-11-20 09:56:17.751518] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.751527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.751533] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14aa690) 00:22:41.074 [2024-11-20 09:56:17.751544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.074 [2024-11-20 09:56:17.751565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c100, cid 0, qid 0 00:22:41.074 [2024-11-20 09:56:17.751657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.074 [2024-11-20 09:56:17.751669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.074 [2024-11-20 09:56:17.751676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.074 [2024-11-20 09:56:17.751683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c100) on tqpair=0x14aa690 00:22:41.074 [2024-11-20 09:56:17.751690] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:41.075 [2024-11-20 09:56:17.751699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:41.075 [2024-11-20 09:56:17.751712] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:41.075 [2024-11-20 09:56:17.751726] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:41.075 [2024-11-20 09:56:17.751740] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.751748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14aa690) 00:22:41.075 [2024-11-20 09:56:17.751759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.075 [2024-11-20 09:56:17.751781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c100, cid 0, qid 0 00:22:41.075 [2024-11-20 09:56:17.751897] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:41.075 [2024-11-20 09:56:17.751912] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:41.075 [2024-11-20 09:56:17.751920] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.751926] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14aa690): datao=0, datal=4096, cccid=0 00:22:41.075 [2024-11-20 09:56:17.751934] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x150c100) on tqpair(0x14aa690): expected_datao=0, payload_size=4096 00:22:41.075 [2024-11-20 09:56:17.751942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.751952] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.751960] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.751972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.075 [2024-11-20 09:56:17.751982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.075 [2024-11-20 09:56:17.751988] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.751995] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c100) on tqpair=0x14aa690 00:22:41.075 [2024-11-20 09:56:17.752006] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:41.075 [2024-11-20 09:56:17.752015] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:41.075 [2024-11-20 09:56:17.752022] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:41.075 [2024-11-20 09:56:17.752034] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:41.075 [2024-11-20 09:56:17.752042] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:41.075 [2024-11-20 09:56:17.752051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:41.075 [2024-11-20 09:56:17.752071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:41.075 [2024-11-20 09:56:17.752084] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.752092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.752098] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14aa690) 00:22:41.075 [2024-11-20 09:56:17.752109] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:41.075 [2024-11-20 09:56:17.752131] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c100, cid 0, qid 0 00:22:41.075 [2024-11-20 09:56:17.752205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.075 [2024-11-20 09:56:17.752217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.075 [2024-11-20 09:56:17.752224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.752231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c100) on tqpair=0x14aa690 00:22:41.075 [2024-11-20 09:56:17.752241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.752248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.752255] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14aa690) 00:22:41.075 [2024-11-20 09:56:17.752265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.075 [2024-11-20 09:56:17.752275] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.752282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.075 [2024-11-20 
09:56:17.752288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x14aa690) 00:22:41.075 [2024-11-20 09:56:17.752307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.075 [2024-11-20 09:56:17.752320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.752327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.752334] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x14aa690) 00:22:41.075 [2024-11-20 09:56:17.752343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.075 [2024-11-20 09:56:17.752353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.752359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.752366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14aa690) 00:22:41.075 [2024-11-20 09:56:17.752374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.075 [2024-11-20 09:56:17.752384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:41.075 [2024-11-20 09:56:17.752398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:41.075 [2024-11-20 09:56:17.752410] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.752417] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14aa690) 00:22:41.075 [2024-11-20 09:56:17.752427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.075 [2024-11-20 09:56:17.752450] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c100, cid 0, qid 0 00:22:41.075 [2024-11-20 09:56:17.752461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c280, cid 1, qid 0 00:22:41.075 [2024-11-20 09:56:17.752469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c400, cid 2, qid 0 00:22:41.075 [2024-11-20 09:56:17.752477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c580, cid 3, qid 0 00:22:41.075 [2024-11-20 09:56:17.752484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c700, cid 4, qid 0 00:22:41.075 [2024-11-20 09:56:17.752609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.075 [2024-11-20 09:56:17.752624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.075 [2024-11-20 09:56:17.752631] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.752638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c700) on tqpair=0x14aa690 00:22:41.075 [2024-11-20 09:56:17.752650] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:41.075 [2024-11-20 09:56:17.752660] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:41.075 [2024-11-20 09:56:17.752675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:41.075 [2024-11-20 09:56:17.752687] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:41.075 [2024-11-20 09:56:17.752698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.752705] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.752712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14aa690) 00:22:41.075 [2024-11-20 09:56:17.752722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:41.075 [2024-11-20 09:56:17.752748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c700, cid 4, qid 0 00:22:41.075 [2024-11-20 09:56:17.752825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.075 [2024-11-20 09:56:17.752839] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.075 [2024-11-20 09:56:17.752846] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.752853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c700) on tqpair=0x14aa690 00:22:41.075 [2024-11-20 09:56:17.752924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:41.075 [2024-11-20 09:56:17.752946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:41.075 [2024-11-20 09:56:17.752961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.752969] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14aa690) 00:22:41.075 [2024-11-20 09:56:17.752980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.075 [2024-11-20 09:56:17.753001] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c700, cid 4, qid 0 00:22:41.075 [2024-11-20 09:56:17.753138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:41.075 [2024-11-20 09:56:17.753153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:41.075 [2024-11-20 09:56:17.753160] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.753166] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14aa690): datao=0, datal=4096, cccid=4 00:22:41.075 [2024-11-20 09:56:17.753174] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x150c700) on tqpair(0x14aa690): expected_datao=0, payload_size=4096 00:22:41.075 [2024-11-20 09:56:17.753181] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.753191] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.753199] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:41.075 [2024-11-20 
09:56:17.753211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.075 [2024-11-20 09:56:17.753221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.075 [2024-11-20 09:56:17.753228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.075 [2024-11-20 09:56:17.753235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c700) on tqpair=0x14aa690 00:22:41.075 [2024-11-20 09:56:17.753253] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:41.075 [2024-11-20 09:56:17.753277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:41.076 [2024-11-20 09:56:17.753297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:41.076 [2024-11-20 09:56:17.753319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.753328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14aa690) 00:22:41.076 [2024-11-20 09:56:17.753339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.076 [2024-11-20 09:56:17.753361] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c700, cid 4, qid 0 00:22:41.076 [2024-11-20 09:56:17.753496] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:41.076 [2024-11-20 09:56:17.753511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:41.076 [2024-11-20 09:56:17.753518] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.753525] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14aa690): datao=0, datal=4096, cccid=4 00:22:41.076 [2024-11-20 09:56:17.753532] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x150c700) on tqpair(0x14aa690): expected_datao=0, payload_size=4096 00:22:41.076 [2024-11-20 09:56:17.753544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.753555] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.753562] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.753574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.076 [2024-11-20 09:56:17.753584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.076 [2024-11-20 09:56:17.753591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.753598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c700) on tqpair=0x14aa690 00:22:41.076 [2024-11-20 09:56:17.753623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:41.076 [2024-11-20 09:56:17.753643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:41.076 [2024-11-20 09:56:17.753658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.753665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x14aa690) 00:22:41.076 [2024-11-20 09:56:17.753676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.076 [2024-11-20 09:56:17.753698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c700, cid 4, qid 0 00:22:41.076 [2024-11-20 09:56:17.753828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:41.076 [2024-11-20 09:56:17.753841] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:41.076 [2024-11-20 09:56:17.753847] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.753854] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14aa690): datao=0, datal=4096, cccid=4 00:22:41.076 [2024-11-20 09:56:17.753861] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x150c700) on tqpair(0x14aa690): expected_datao=0, payload_size=4096 00:22:41.076 [2024-11-20 09:56:17.753869] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.753878] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.753886] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.753898] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.076 [2024-11-20 09:56:17.753908] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.076 [2024-11-20 09:56:17.753915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.753922] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c700) on tqpair=0x14aa690 00:22:41.076 [2024-11-20 09:56:17.753936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:41.076 [2024-11-20 09:56:17.753951] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:41.076 [2024-11-20 09:56:17.753967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:41.076 [2024-11-20 09:56:17.753980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:41.076 [2024-11-20 09:56:17.753989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:41.076 [2024-11-20 09:56:17.753998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:41.076 [2024-11-20 09:56:17.754007] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:41.076 [2024-11-20 09:56:17.754019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:41.076 [2024-11-20 09:56:17.754028] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:41.076 [2024-11-20 09:56:17.754047] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.076 
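Everything traced from the enable handshake up to "setting state to ready" is performed internally by the SPDK host library during connect. As a rough illustration only, the sketch below connects to the same target this log exercises (TCP, 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1, taken from the output) and reads back the identify data whose fields (MDTS, CNTLID) appear above. It assumes an SPDK development environment with spdk/nvme.h available; it is not part of the test script being run here.

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";
        if (spdk_env_init(&env_opts) < 0) {
            fprintf(stderr, "spdk_env_init failed\n");
            return 1;
        }

        /* Target address and NQN taken from this log; adjust as needed. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* spdk_nvme_connect() runs the initialization sequence traced above
         * (enable, identify controller, AER setup, keep-alive, queue count,
         * namespace identify) and returns once the controller is ready. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "connect failed\n");
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("MN: %.40s CNTLID: 0x%04x MDTS: %u\n",
               (const char *)cdata->mn, (unsigned)cdata->cntlid,
               (unsigned)cdata->mdts);

        spdk_nvme_detach(ctrlr);
        return 0;
    }

Because spdk_nvme_connect() is synchronous, by the time it returns the identify, AER-configuration, keep-alive and queue-count steps logged above have already completed.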
[2024-11-20 09:56:17.754056] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14aa690) 00:22:41.076 [2024-11-20 09:56:17.754066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.076 [2024-11-20 09:56:17.754077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.754085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.754091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14aa690) 00:22:41.076 [2024-11-20 09:56:17.754100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.076 [2024-11-20 09:56:17.754125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c700, cid 4, qid 0 00:22:41.076 [2024-11-20 09:56:17.754153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c880, cid 5, qid 0 00:22:41.076 [2024-11-20 09:56:17.754274] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.076 [2024-11-20 09:56:17.754287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.076 [2024-11-20 09:56:17.754294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.758307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c700) on tqpair=0x14aa690 00:22:41.076 [2024-11-20 09:56:17.758325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.076 [2024-11-20 09:56:17.758336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.076 [2024-11-20 09:56:17.758343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.758364] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c880) on tqpair=0x14aa690 00:22:41.076 [2024-11-20 09:56:17.758382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.758392] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14aa690) 00:22:41.076 [2024-11-20 09:56:17.758403] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.076 [2024-11-20 09:56:17.758425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c880, cid 5, qid 0 00:22:41.076 [2024-11-20 09:56:17.758552] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.076 [2024-11-20 09:56:17.758565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.076 [2024-11-20 09:56:17.758572] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.758578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c880) on tqpair=0x14aa690 00:22:41.076 [2024-11-20 09:56:17.758594] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.758603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14aa690) 00:22:41.076 [2024-11-20 09:56:17.758613] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.076 [2024-11-20 09:56:17.758634] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c880, cid 5, qid 0 00:22:41.076 [2024-11-20 09:56:17.758711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.076 [2024-11-20 09:56:17.758725] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.076 [2024-11-20 09:56:17.758732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.758739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c880) on tqpair=0x14aa690 00:22:41.076 [2024-11-20 09:56:17.758760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.758770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14aa690) 00:22:41.076 [2024-11-20 09:56:17.758781] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.076 [2024-11-20 09:56:17.758801] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c880, cid 5, qid 0 00:22:41.076 [2024-11-20 09:56:17.758881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.076 [2024-11-20 09:56:17.758895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.076 [2024-11-20 09:56:17.758902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.758909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c880) on tqpair=0x14aa690 00:22:41.076 [2024-11-20 09:56:17.758934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.758945] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14aa690) 00:22:41.076 [2024-11-20 09:56:17.758955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.076 [2024-11-20 09:56:17.758968] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.758975] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14aa690) 00:22:41.076 [2024-11-20 09:56:17.758985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.076 [2024-11-20 09:56:17.758997] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.759004] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x14aa690) 00:22:41.076 [2024-11-20 09:56:17.759014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.076 [2024-11-20 09:56:17.759026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.076 [2024-11-20 09:56:17.759033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x14aa690) 00:22:41.076 [2024-11-20 09:56:17.759043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.076 [2024-11-20 09:56:17.759065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c880, cid 5, qid 0 00:22:41.076 
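The GET LOG PAGE commands above (error, health, firmware-slot and command-effects pages, all with nsid:ffffffff) are issued while SPDK builds its supported-log-pages state. As a hedged sketch of fetching one of them yourself, the following fetches the SMART / Health Information page (02h) through the public admin-command API; it assumes a controller handle obtained as in the previous sketch.

    #include <stdio.h>
    #include <stdbool.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* Completion callback: flag completion and report errors. */
    static void log_page_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        bool *done = arg;

        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "GET LOG PAGE failed\n");
        }
        *done = true;
    }

    /* Fetch the SMART / Health Information log page (02h) from a controller
     * connected as in the earlier sketch. */
    static void print_health(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_health_information_page *health;
        bool done = false;

        /* DMA-able buffer, as SPDK admin commands expect. */
        health = spdk_zmalloc(sizeof(*health), 0x1000, NULL,
                              SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
        if (health == NULL) {
            return;
        }

        if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
                SPDK_NVME_LOG_HEALTH_INFORMATION, SPDK_NVME_GLOBAL_NS_TAG,
                health, sizeof(*health), 0, log_page_done, &done) == 0) {
            while (!done) {
                /* Admin completions (and keep-alives) are reaped here. */
                spdk_nvme_ctrlr_process_admin_completions(ctrlr);
            }
            printf("percentage used: %u%%, available spare: %u%%\n",
                   (unsigned)health->percentage_used,
                   (unsigned)health->available_spare);
        }

        spdk_free(health);
    }

The "Health Information" block printed later in this log (available spare, temperature, percentage used) is the target's answer to exactly this kind of request.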
[2024-11-20 09:56:17.759076] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c700, cid 4, qid 0 00:22:41.077 [2024-11-20 09:56:17.759084] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150ca00, cid 6, qid 0 00:22:41.077 [2024-11-20 09:56:17.759092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150cb80, cid 7, qid 0 00:22:41.077 [2024-11-20 09:56:17.759256] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:41.077 [2024-11-20 09:56:17.759271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:41.077 [2024-11-20 09:56:17.759278] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759284] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14aa690): datao=0, datal=8192, cccid=5 00:22:41.077 [2024-11-20 09:56:17.759292] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x150c880) on tqpair(0x14aa690): expected_datao=0, payload_size=8192 00:22:41.077 [2024-11-20 09:56:17.759299] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759335] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759344] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:41.077 [2024-11-20 09:56:17.759371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:41.077 [2024-11-20 09:56:17.759379] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759385] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14aa690): datao=0, datal=512, cccid=4 00:22:41.077 [2024-11-20 09:56:17.759393] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x150c700) on tqpair(0x14aa690): expected_datao=0, payload_size=512 00:22:41.077 [2024-11-20 09:56:17.759400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759409] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759416] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:41.077 [2024-11-20 09:56:17.759434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:41.077 [2024-11-20 09:56:17.759440] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759447] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14aa690): datao=0, datal=512, cccid=6 00:22:41.077 [2024-11-20 09:56:17.759454] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x150ca00) on tqpair(0x14aa690): expected_datao=0, payload_size=512 00:22:41.077 [2024-11-20 09:56:17.759462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759471] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759478] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:41.077 [2024-11-20 09:56:17.759495] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:41.077 [2024-11-20 09:56:17.759502] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759508] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14aa690): datao=0, datal=4096, cccid=7 00:22:41.077 [2024-11-20 09:56:17.759515] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x150cb80) on tqpair(0x14aa690): expected_datao=0, payload_size=4096 00:22:41.077 [2024-11-20 09:56:17.759523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759533] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759540] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.077 [2024-11-20 09:56:17.759561] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.077 [2024-11-20 09:56:17.759568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c880) on tqpair=0x14aa690 00:22:41.077 [2024-11-20 09:56:17.759596] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.077 [2024-11-20 09:56:17.759608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.077 [2024-11-20 09:56:17.759615] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c700) on tqpair=0x14aa690 00:22:41.077 [2024-11-20 09:56:17.759637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.077 [2024-11-20 09:56:17.759648] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.077 [2024-11-20 09:56:17.759655] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150ca00) on tqpair=0x14aa690 00:22:41.077 [2024-11-20 09:56:17.759672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.077 [2024-11-20 09:56:17.759682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.077 [2024-11-20 09:56:17.759689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.077 [2024-11-20 09:56:17.759695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150cb80) on tqpair=0x14aa690 00:22:41.077 ===================================================== 00:22:41.077 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:41.077 ===================================================== 00:22:41.077 Controller Capabilities/Features 00:22:41.077 ================================ 00:22:41.077 Vendor ID: 8086 00:22:41.077 Subsystem Vendor ID: 8086 00:22:41.077 Serial Number: SPDK00000000000001 00:22:41.077 Model Number: SPDK bdev Controller 00:22:41.077 Firmware Version: 25.01 00:22:41.077 Recommended Arb Burst: 6 00:22:41.077 IEEE OUI Identifier: e4 d2 5c 00:22:41.077 Multi-path I/O 00:22:41.077 May have multiple subsystem ports: Yes 00:22:41.077 May have multiple controllers: Yes 00:22:41.077 Associated with SR-IOV VF: No 00:22:41.077 Max Data Transfer Size: 131072 00:22:41.077 Max Number of Namespaces: 32 00:22:41.077 Max Number of I/O Queues: 127 00:22:41.077 NVMe Specification Version (VS): 1.3 00:22:41.077 NVMe Specification Version (Identify): 1.3 
00:22:41.077 Maximum Queue Entries: 128 00:22:41.077 Contiguous Queues Required: Yes 00:22:41.077 Arbitration Mechanisms Supported 00:22:41.077 Weighted Round Robin: Not Supported 00:22:41.077 Vendor Specific: Not Supported 00:22:41.077 Reset Timeout: 15000 ms 00:22:41.077 Doorbell Stride: 4 bytes 00:22:41.077 NVM Subsystem Reset: Not Supported 00:22:41.077 Command Sets Supported 00:22:41.077 NVM Command Set: Supported 00:22:41.077 Boot Partition: Not Supported 00:22:41.077 Memory Page Size Minimum: 4096 bytes 00:22:41.077 Memory Page Size Maximum: 4096 bytes 00:22:41.077 Persistent Memory Region: Not Supported 00:22:41.077 Optional Asynchronous Events Supported 00:22:41.077 Namespace Attribute Notices: Supported 00:22:41.077 Firmware Activation Notices: Not Supported 00:22:41.077 ANA Change Notices: Not Supported 00:22:41.077 PLE Aggregate Log Change Notices: Not Supported 00:22:41.077 LBA Status Info Alert Notices: Not Supported 00:22:41.077 EGE Aggregate Log Change Notices: Not Supported 00:22:41.077 Normal NVM Subsystem Shutdown event: Not Supported 00:22:41.077 Zone Descriptor Change Notices: Not Supported 00:22:41.077 Discovery Log Change Notices: Not Supported 00:22:41.077 Controller Attributes 00:22:41.077 128-bit Host Identifier: Supported 00:22:41.077 Non-Operational Permissive Mode: Not Supported 00:22:41.077 NVM Sets: Not Supported 00:22:41.077 Read Recovery Levels: Not Supported 00:22:41.077 Endurance Groups: Not Supported 00:22:41.077 Predictable Latency Mode: Not Supported 00:22:41.077 Traffic Based Keep ALive: Not Supported 00:22:41.077 Namespace Granularity: Not Supported 00:22:41.077 SQ Associations: Not Supported 00:22:41.077 UUID List: Not Supported 00:22:41.077 Multi-Domain Subsystem: Not Supported 00:22:41.077 Fixed Capacity Management: Not Supported 00:22:41.077 Variable Capacity Management: Not Supported 00:22:41.077 Delete Endurance Group: Not Supported 00:22:41.077 Delete NVM Set: Not Supported 00:22:41.077 Extended LBA Formats Supported: Not Supported 00:22:41.077 Flexible Data Placement Supported: Not Supported 00:22:41.077 00:22:41.077 Controller Memory Buffer Support 00:22:41.077 ================================ 00:22:41.077 Supported: No 00:22:41.077 00:22:41.077 Persistent Memory Region Support 00:22:41.077 ================================ 00:22:41.077 Supported: No 00:22:41.077 00:22:41.077 Admin Command Set Attributes 00:22:41.077 ============================ 00:22:41.077 Security Send/Receive: Not Supported 00:22:41.077 Format NVM: Not Supported 00:22:41.077 Firmware Activate/Download: Not Supported 00:22:41.077 Namespace Management: Not Supported 00:22:41.077 Device Self-Test: Not Supported 00:22:41.077 Directives: Not Supported 00:22:41.077 NVMe-MI: Not Supported 00:22:41.077 Virtualization Management: Not Supported 00:22:41.077 Doorbell Buffer Config: Not Supported 00:22:41.077 Get LBA Status Capability: Not Supported 00:22:41.077 Command & Feature Lockdown Capability: Not Supported 00:22:41.077 Abort Command Limit: 4 00:22:41.077 Async Event Request Limit: 4 00:22:41.077 Number of Firmware Slots: N/A 00:22:41.077 Firmware Slot 1 Read-Only: N/A 00:22:41.077 Firmware Activation Without Reset: N/A 00:22:41.077 Multiple Update Detection Support: N/A 00:22:41.077 Firmware Update Granularity: No Information Provided 00:22:41.077 Per-Namespace SMART Log: No 00:22:41.077 Asymmetric Namespace Access Log Page: Not Supported 00:22:41.077 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:41.077 Command Effects Log Page: Supported 00:22:41.077 Get Log Page Extended 
Data: Supported 00:22:41.077 Telemetry Log Pages: Not Supported 00:22:41.077 Persistent Event Log Pages: Not Supported 00:22:41.077 Supported Log Pages Log Page: May Support 00:22:41.077 Commands Supported & Effects Log Page: Not Supported 00:22:41.077 Feature Identifiers & Effects Log Page:May Support 00:22:41.078 NVMe-MI Commands & Effects Log Page: May Support 00:22:41.078 Data Area 4 for Telemetry Log: Not Supported 00:22:41.078 Error Log Page Entries Supported: 128 00:22:41.078 Keep Alive: Supported 00:22:41.078 Keep Alive Granularity: 10000 ms 00:22:41.078 00:22:41.078 NVM Command Set Attributes 00:22:41.078 ========================== 00:22:41.078 Submission Queue Entry Size 00:22:41.078 Max: 64 00:22:41.078 Min: 64 00:22:41.078 Completion Queue Entry Size 00:22:41.078 Max: 16 00:22:41.078 Min: 16 00:22:41.078 Number of Namespaces: 32 00:22:41.078 Compare Command: Supported 00:22:41.078 Write Uncorrectable Command: Not Supported 00:22:41.078 Dataset Management Command: Supported 00:22:41.078 Write Zeroes Command: Supported 00:22:41.078 Set Features Save Field: Not Supported 00:22:41.078 Reservations: Supported 00:22:41.078 Timestamp: Not Supported 00:22:41.078 Copy: Supported 00:22:41.078 Volatile Write Cache: Present 00:22:41.078 Atomic Write Unit (Normal): 1 00:22:41.078 Atomic Write Unit (PFail): 1 00:22:41.078 Atomic Compare & Write Unit: 1 00:22:41.078 Fused Compare & Write: Supported 00:22:41.078 Scatter-Gather List 00:22:41.078 SGL Command Set: Supported 00:22:41.078 SGL Keyed: Supported 00:22:41.078 SGL Bit Bucket Descriptor: Not Supported 00:22:41.078 SGL Metadata Pointer: Not Supported 00:22:41.078 Oversized SGL: Not Supported 00:22:41.078 SGL Metadata Address: Not Supported 00:22:41.078 SGL Offset: Supported 00:22:41.078 Transport SGL Data Block: Not Supported 00:22:41.078 Replay Protected Memory Block: Not Supported 00:22:41.078 00:22:41.078 Firmware Slot Information 00:22:41.078 ========================= 00:22:41.078 Active slot: 1 00:22:41.078 Slot 1 Firmware Revision: 25.01 00:22:41.078 00:22:41.078 00:22:41.078 Commands Supported and Effects 00:22:41.078 ============================== 00:22:41.078 Admin Commands 00:22:41.078 -------------- 00:22:41.078 Get Log Page (02h): Supported 00:22:41.078 Identify (06h): Supported 00:22:41.078 Abort (08h): Supported 00:22:41.078 Set Features (09h): Supported 00:22:41.078 Get Features (0Ah): Supported 00:22:41.078 Asynchronous Event Request (0Ch): Supported 00:22:41.078 Keep Alive (18h): Supported 00:22:41.078 I/O Commands 00:22:41.078 ------------ 00:22:41.078 Flush (00h): Supported LBA-Change 00:22:41.078 Write (01h): Supported LBA-Change 00:22:41.078 Read (02h): Supported 00:22:41.078 Compare (05h): Supported 00:22:41.078 Write Zeroes (08h): Supported LBA-Change 00:22:41.078 Dataset Management (09h): Supported LBA-Change 00:22:41.078 Copy (19h): Supported LBA-Change 00:22:41.078 00:22:41.078 Error Log 00:22:41.078 ========= 00:22:41.078 00:22:41.078 Arbitration 00:22:41.078 =========== 00:22:41.078 Arbitration Burst: 1 00:22:41.078 00:22:41.078 Power Management 00:22:41.078 ================ 00:22:41.078 Number of Power States: 1 00:22:41.078 Current Power State: Power State #0 00:22:41.078 Power State #0: 00:22:41.078 Max Power: 0.00 W 00:22:41.078 Non-Operational State: Operational 00:22:41.078 Entry Latency: Not Reported 00:22:41.078 Exit Latency: Not Reported 00:22:41.078 Relative Read Throughput: 0 00:22:41.078 Relative Read Latency: 0 00:22:41.078 Relative Write Throughput: 0 00:22:41.078 Relative Write Latency: 0 
00:22:41.078 Idle Power: Not Reported 00:22:41.078 Active Power: Not Reported 00:22:41.078 Non-Operational Permissive Mode: Not Supported 00:22:41.078 00:22:41.078 Health Information 00:22:41.078 ================== 00:22:41.078 Critical Warnings: 00:22:41.078 Available Spare Space: OK 00:22:41.078 Temperature: OK 00:22:41.078 Device Reliability: OK 00:22:41.078 Read Only: No 00:22:41.078 Volatile Memory Backup: OK 00:22:41.078 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:41.078 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:41.078 Available Spare: 0% 00:22:41.078 Available Spare Threshold: 0% 00:22:41.078 Life Percentage Used:[2024-11-20 09:56:17.759823] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.078 [2024-11-20 09:56:17.759836] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x14aa690) 00:22:41.078 [2024-11-20 09:56:17.759847] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.078 [2024-11-20 09:56:17.759869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150cb80, cid 7, qid 0 00:22:41.078 [2024-11-20 09:56:17.759980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.078 [2024-11-20 09:56:17.759994] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.078 [2024-11-20 09:56:17.760001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.078 [2024-11-20 09:56:17.760008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150cb80) on tqpair=0x14aa690 00:22:41.078 [2024-11-20 09:56:17.760051] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:41.078 [2024-11-20 09:56:17.760071] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c100) on tqpair=0x14aa690 00:22:41.078 [2024-11-20 09:56:17.760082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.078 [2024-11-20 09:56:17.760091] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c280) on tqpair=0x14aa690 00:22:41.078 [2024-11-20 09:56:17.760099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.078 [2024-11-20 09:56:17.760107] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c400) on tqpair=0x14aa690 00:22:41.078 [2024-11-20 09:56:17.760115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.078 [2024-11-20 09:56:17.760123] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c580) on tqpair=0x14aa690 00:22:41.078 [2024-11-20 09:56:17.760131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.078 [2024-11-20 09:56:17.760143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.078 [2024-11-20 09:56:17.760151] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.078 [2024-11-20 09:56:17.760158] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14aa690) 00:22:41.078 [2024-11-20 09:56:17.760168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:41.078 [2024-11-20 09:56:17.760191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c580, cid 3, qid 0 00:22:41.078 [2024-11-20 09:56:17.760265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.078 [2024-11-20 09:56:17.760278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.078 [2024-11-20 09:56:17.760285] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.078 [2024-11-20 09:56:17.760292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c580) on tqpair=0x14aa690 00:22:41.078 [2024-11-20 09:56:17.760311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.078 [2024-11-20 09:56:17.760321] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.078 [2024-11-20 09:56:17.760328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14aa690) 00:22:41.078 [2024-11-20 09:56:17.760338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.078 [2024-11-20 09:56:17.760365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c580, cid 3, qid 0 00:22:41.078 [2024-11-20 09:56:17.760463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.078 [2024-11-20 09:56:17.760476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.078 [2024-11-20 09:56:17.760483] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.078 [2024-11-20 09:56:17.760493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c580) on tqpair=0x14aa690 00:22:41.078 [2024-11-20 09:56:17.760502] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:41.078 [2024-11-20 09:56:17.760510] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:41.079 [2024-11-20 09:56:17.760526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.760534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.760541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14aa690) 00:22:41.079 [2024-11-20 09:56:17.760552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.079 [2024-11-20 09:56:17.760572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c580, cid 3, qid 0 00:22:41.079 [2024-11-20 09:56:17.760700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.079 [2024-11-20 09:56:17.760714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.079 [2024-11-20 09:56:17.760721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.760728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c580) on tqpair=0x14aa690 00:22:41.079 [2024-11-20 09:56:17.760744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.760753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.760760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14aa690) 00:22:41.079 [2024-11-20 09:56:17.760771] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.079 [2024-11-20 09:56:17.760791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c580, cid 3, qid 0 00:22:41.079 [2024-11-20 09:56:17.760872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.079 [2024-11-20 09:56:17.760886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.079 [2024-11-20 09:56:17.760893] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.760900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c580) on tqpair=0x14aa690 00:22:41.079 [2024-11-20 09:56:17.760916] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.760925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.760932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14aa690) 00:22:41.079 [2024-11-20 09:56:17.760942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.079 [2024-11-20 09:56:17.760963] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c580, cid 3, qid 0 00:22:41.079 [2024-11-20 09:56:17.761053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.079 [2024-11-20 09:56:17.761066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.079 [2024-11-20 09:56:17.761073] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.761080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c580) on tqpair=0x14aa690 00:22:41.079 [2024-11-20 09:56:17.761096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.761105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.761112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14aa690) 00:22:41.079 [2024-11-20 09:56:17.761122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.079 [2024-11-20 09:56:17.761143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c580, cid 3, qid 0 00:22:41.079 [2024-11-20 09:56:17.761222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.079 [2024-11-20 09:56:17.761239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.079 [2024-11-20 09:56:17.761247] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.761254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c580) on tqpair=0x14aa690 00:22:41.079 [2024-11-20 09:56:17.761271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.761280] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.761286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14aa690) 00:22:41.079 [2024-11-20 09:56:17.761297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.079 [2024-11-20 09:56:17.761326] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c580, cid 3, qid 0 00:22:41.079 [2024-11-20 09:56:17.761423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.079 [2024-11-20 09:56:17.761437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.079 [2024-11-20 09:56:17.761445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.761451] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c580) on tqpair=0x14aa690 00:22:41.079 [2024-11-20 09:56:17.761467] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.761477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.761483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14aa690) 00:22:41.079 [2024-11-20 09:56:17.761494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.079 [2024-11-20 09:56:17.761515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c580, cid 3, qid 0 00:22:41.079 [2024-11-20 09:56:17.761592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.079 [2024-11-20 09:56:17.761606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.079 [2024-11-20 09:56:17.761613] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.761620] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c580) on tqpair=0x14aa690 00:22:41.079 [2024-11-20 09:56:17.761636] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.761645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.761652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14aa690) 00:22:41.079 [2024-11-20 09:56:17.761662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.079 [2024-11-20 09:56:17.761682] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c580, cid 3, qid 0 00:22:41.079 [2024-11-20 09:56:17.761773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.079 [2024-11-20 09:56:17.761785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.079 [2024-11-20 09:56:17.761792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.761799] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c580) on tqpair=0x14aa690 00:22:41.079 [2024-11-20 09:56:17.761814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.761824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.761830] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14aa690) 00:22:41.079 [2024-11-20 09:56:17.761841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.079 [2024-11-20 09:56:17.761861] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c580, cid 3, qid 0 00:22:41.079 [2024-11-20 09:56:17.761934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.079 [2024-11-20 
09:56:17.761946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.079 [2024-11-20 09:56:17.761960] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.761967] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c580) on tqpair=0x14aa690 00:22:41.079 [2024-11-20 09:56:17.761984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.761993] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.761999] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14aa690) 00:22:41.079 [2024-11-20 09:56:17.762010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.079 [2024-11-20 09:56:17.762031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c580, cid 3, qid 0 00:22:41.079 [2024-11-20 09:56:17.762108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.079 [2024-11-20 09:56:17.762122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.079 [2024-11-20 09:56:17.762129] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.762136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c580) on tqpair=0x14aa690 00:22:41.079 [2024-11-20 09:56:17.762152] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.762161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.762168] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14aa690) 00:22:41.079 [2024-11-20 09:56:17.762178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.079 [2024-11-20 09:56:17.762199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c580, cid 3, qid 0 00:22:41.079 [2024-11-20 09:56:17.762273] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.079 [2024-11-20 09:56:17.762286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.079 [2024-11-20 09:56:17.762293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.762300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c580) on tqpair=0x14aa690 00:22:41.079 [2024-11-20 09:56:17.766331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.766342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:41.079 [2024-11-20 09:56:17.766364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14aa690) 00:22:41.079 [2024-11-20 09:56:17.766375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.079 [2024-11-20 09:56:17.766398] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x150c580, cid 3, qid 0 00:22:41.079 [2024-11-20 09:56:17.766524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:41.079 [2024-11-20 09:56:17.766536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:41.079 [2024-11-20 09:56:17.766544] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:41.079 
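The repeated FABRIC PROPERTY GET records in this region are the shutdown poll that follows "Prepare to destruct SSD" above: the host sets CC.SHN and keeps reading CSTS.SHST until the controller reports shutdown complete ("shutdown complete in 6 milliseconds" just below). From an application's point of view this is normally triggered simply by detaching; a minimal sketch, assuming the same controller handle as in the earlier sketches:

    #include "spdk/nvme.h"

    /* Graceful teardown for a controller connected as in the sketches above.
     * spdk_nvme_detach() drives the shutdown sequence logged here: CC.SHN is
     * set and CSTS.SHST is polled until the controller reports completion. */
    static void teardown(struct spdk_nvme_ctrlr *ctrlr)
    {
        /* Drain outstanding admin completions (keep-alive, AER) first. */
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        spdk_nvme_detach(ctrlr);
    }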
[2024-11-20 09:56:17.766550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x150c580) on tqpair=0x14aa690 00:22:41.079 [2024-11-20 09:56:17.766563] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:22:41.079 0% 00:22:41.079 Data Units Read: 0 00:22:41.079 Data Units Written: 0 00:22:41.079 Host Read Commands: 0 00:22:41.079 Host Write Commands: 0 00:22:41.079 Controller Busy Time: 0 minutes 00:22:41.079 Power Cycles: 0 00:22:41.079 Power On Hours: 0 hours 00:22:41.079 Unsafe Shutdowns: 0 00:22:41.079 Unrecoverable Media Errors: 0 00:22:41.079 Lifetime Error Log Entries: 0 00:22:41.079 Warning Temperature Time: 0 minutes 00:22:41.079 Critical Temperature Time: 0 minutes 00:22:41.079 00:22:41.080 Number of Queues 00:22:41.080 ================ 00:22:41.080 Number of I/O Submission Queues: 127 00:22:41.080 Number of I/O Completion Queues: 127 00:22:41.080 00:22:41.080 Active Namespaces 00:22:41.080 ================= 00:22:41.080 Namespace ID:1 00:22:41.080 Error Recovery Timeout: Unlimited 00:22:41.080 Command Set Identifier: NVM (00h) 00:22:41.080 Deallocate: Supported 00:22:41.080 Deallocated/Unwritten Error: Not Supported 00:22:41.080 Deallocated Read Value: Unknown 00:22:41.080 Deallocate in Write Zeroes: Not Supported 00:22:41.080 Deallocated Guard Field: 0xFFFF 00:22:41.080 Flush: Supported 00:22:41.080 Reservation: Supported 00:22:41.080 Namespace Sharing Capabilities: Multiple Controllers 00:22:41.080 Size (in LBAs): 131072 (0GiB) 00:22:41.080 Capacity (in LBAs): 131072 (0GiB) 00:22:41.080 Utilization (in LBAs): 131072 (0GiB) 00:22:41.080 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:41.080 EUI64: ABCDEF0123456789 00:22:41.080 UUID: dd4d0435-1442-407c-a9de-6bf6fa082b6a 00:22:41.080 Thin Provisioning: Not Supported 00:22:41.080 Per-NS Atomic Units: Yes 00:22:41.080 Atomic Boundary Size (Normal): 0 00:22:41.080 Atomic Boundary Size (PFail): 0 00:22:41.080 Atomic Boundary Offset: 0 00:22:41.080 Maximum Single Source Range Length: 65535 00:22:41.080 Maximum Copy Length: 65535 00:22:41.080 Maximum Source Range Count: 1 00:22:41.080 NGUID/EUI64 Never Reused: No 00:22:41.080 Namespace Write Protected: No 00:22:41.080 Number of LBA Formats: 1 00:22:41.080 Current LBA Format: LBA Format #00 00:22:41.080 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:41.080 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:41.080 09:56:17 
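Note: the controller and namespace attributes dumped above (queue counts, Active Namespaces, the LBA format table) are what host/identify.sh collected from the nqn.2016-06.io.spdk:cnode1 subsystem before deleting it. A minimal way to reproduce the same dump against the listener used in this run is the SPDK identify example; the binary name and path below are an assumption based on the build layout seen elsewhere in this log (build/bin/spdk_nvme_perf), not a command taken from this trace:
# Sketch only: adjust the binary name/path to the local SPDK build.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_nvme_identify" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'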
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:41.080 rmmod nvme_tcp 00:22:41.080 rmmod nvme_fabrics 00:22:41.080 rmmod nvme_keyring 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3797943 ']' 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3797943 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3797943 ']' 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3797943 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3797943 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3797943' 00:22:41.080 killing process with pid 3797943 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3797943 00:22:41.080 09:56:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3797943 00:22:41.339 09:56:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:41.339 09:56:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:41.339 09:56:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:41.339 09:56:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:41.339 09:56:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:41.339 09:56:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:41.339 09:56:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:41.339 09:56:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:41.339 09:56:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:41.339 09:56:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.339 09:56:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.339 09:56:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:43.886 00:22:43.886 real 0m5.494s 00:22:43.886 user 0m4.536s 00:22:43.886 sys 0m1.912s 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:43.886 
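Note: nvmftestfini above unwinds the identify test in the steps visible in the trace: the kernel initiator modules are removed (rmmod nvme_tcp / nvme_fabrics / nvme_keyring), the nvmf_tgt reactor process (pid 3797943) is killed, the SPDK_NVMF-tagged iptables rules are filtered back out, and the target namespace is removed (_remove_spdk_ns) before cvl_0_1 is flushed. Condensed into a stand-alone cleanup sketch using the names this run uses (the explicit netns delete is an assumption about what _remove_spdk_ns effectively does):
# Manual equivalent of the nvmftestfini sequence traced above.
sudo modprobe -r nvme-tcp nvme-fabrics                            # initiator modules
sudo kill "$nvmfpid" 2>/dev/null || true                          # nvmf_tgt pid saved at start-up
sudo iptables-save | grep -v SPDK_NVMF | sudo iptables-restore    # drop only SPDK's ACCEPT rules
sudo ip netns del cvl_0_0_ns_spdk 2>/dev/null || true             # target-side namespace (assumed)
sudo ip -4 addr flush cvl_0_1                                     # initiator-side interface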
09:56:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:43.886 ************************************ 00:22:43.886 END TEST nvmf_identify 00:22:43.886 ************************************ 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.886 ************************************ 00:22:43.886 START TEST nvmf_perf 00:22:43.886 ************************************ 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:43.886 * Looking for test storage... 00:22:43.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:43.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.886 --rc genhtml_branch_coverage=1 00:22:43.886 --rc genhtml_function_coverage=1 00:22:43.886 --rc genhtml_legend=1 00:22:43.886 --rc geninfo_all_blocks=1 00:22:43.886 --rc geninfo_unexecuted_blocks=1 00:22:43.886 00:22:43.886 ' 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:43.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.886 --rc genhtml_branch_coverage=1 00:22:43.886 --rc genhtml_function_coverage=1 00:22:43.886 --rc genhtml_legend=1 00:22:43.886 --rc geninfo_all_blocks=1 00:22:43.886 --rc geninfo_unexecuted_blocks=1 00:22:43.886 00:22:43.886 ' 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:43.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.886 --rc genhtml_branch_coverage=1 00:22:43.886 --rc genhtml_function_coverage=1 00:22:43.886 --rc genhtml_legend=1 00:22:43.886 --rc geninfo_all_blocks=1 00:22:43.886 --rc geninfo_unexecuted_blocks=1 00:22:43.886 00:22:43.886 ' 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:43.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.886 --rc genhtml_branch_coverage=1 00:22:43.886 --rc genhtml_function_coverage=1 00:22:43.886 --rc genhtml_legend=1 00:22:43.886 --rc geninfo_all_blocks=1 00:22:43.886 --rc geninfo_unexecuted_blocks=1 00:22:43.886 00:22:43.886 ' 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:43.886 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:43.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.887 09:56:20 
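Note: the wall of variable assignments above is nvmf/common.sh (sourced by host/perf.sh) seeding the host identity and defaults that later NVMe-oF commands reuse: the host NQN comes from nvme gen-hostnqn, its UUID suffix doubles as the host ID, NVME_HOST packages both as --hostnqn/--hostid arguments, and NVMF_PORT=4420 plus the 64 MiB / 512 B malloc sizing follow. A small sketch of how a kernel initiator would consume those two values against the cnode1 listener that this run creates further down; the suffix-stripping line is an assumption that matches the values shown above:
# Host-identity sketch using only nvme-cli commands that appear in this log.
NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}           # assumed: host ID = uuid suffix, as in the trace
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"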
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:43.887 09:56:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.825 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:45.825 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:45.826 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:45.826 Found net devices under 0000:09:00.0: cvl_0_0 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:45.826 09:56:22 
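Note: gather_supported_nvmf_pci_devs above walks whitelists of Intel E810/X722 and Mellanox device IDs and then resolves each matching PCI function to its kernel netdev through sysfs; that is how the two E810 ports at 0000:09:00.0 and 0000:09:00.1 (vendor:device 0x8086:0x159b) end up reported as cvl_0_0 and cvl_0_1 in the "Found net devices" records. The same lookup can be done by hand:
# Stand-alone version of the sysfs lookup used by nvmf/common.sh above.
pci=0000:09:00.0                          # first E810 port found in this run
lspci -s "$pci" -nn                       # confirm the 8086:159b ID
ls "/sys/bus/pci/devices/$pci/net/"       # bound netdev name (cvl_0_0 here)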
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:45.826 Found net devices under 0000:09:00.1: cvl_0_1 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.826 09:56:22 
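Note: nvmf_tcp_init above builds the point-to-point test topology: a fresh cvl_0_0_ns_spdk namespace is created, the target-side port cvl_0_0 is moved into it and addressed 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the default namespace at 10.0.0.1/24, and the links are brought up before the ping checks and the SPDK_NVMF iptables ACCEPT rule recorded below. The same steps as a plain script, with the interface, namespace, and address names taken from this run:
# Equivalent of the nvmf_tcp_init trace above, minus the helper indirection.
sudo ip netns add cvl_0_0_ns_spdk
sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
sudo ip addr add 10.0.0.1/24 dev cvl_0_1
sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
sudo ip link set cvl_0_1 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up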
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:45.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:22:45.826 00:22:45.826 --- 10.0.0.2 ping statistics --- 00:22:45.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.826 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:45.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:22:45.826 00:22:45.826 --- 10.0.0.1 ping statistics --- 00:22:45.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.826 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3800038 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3800038 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3800038 ']' 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.826 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:45.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.827 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.827 09:56:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:46.085 [2024-11-20 09:56:22.738955] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:22:46.085 [2024-11-20 09:56:22.739041] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.085 [2024-11-20 09:56:22.813906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:46.085 [2024-11-20 09:56:22.875162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.085 [2024-11-20 09:56:22.875221] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.085 [2024-11-20 09:56:22.875253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.085 [2024-11-20 09:56:22.875268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.085 [2024-11-20 09:56:22.875282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:46.085 [2024-11-20 09:56:22.876981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.085 [2024-11-20 09:56:22.877045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.085 [2024-11-20 09:56:22.877111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:46.085 [2024-11-20 09:56:22.877114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.342 09:56:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.342 09:56:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:46.342 09:56:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:46.342 09:56:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:46.343 09:56:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:46.343 09:56:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.343 09:56:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:46.343 09:56:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:49.625 09:56:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:49.625 09:56:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:49.625 09:56:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:22:49.625 09:56:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:49.884 09:56:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
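Note: at this point perf.sh has started nvmf_tgt (pid 3800038) inside the target namespace and provisioned its first block devices: gen_nvme.sh emits the local NVMe controller configuration, load_subsystem_config attaches it as Nvme0, the jq query recovers that controller's PCIe address (0000:0b:00.0), and bdev_malloc_create adds the 64 MiB, 512-byte-block Malloc0. The same sequence as a bare RPC session, with paths as used in this workspace:
# Condensed form of the perf.sh provisioning steps traced above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py"
"$SPDK/scripts/gen_nvme.sh" | $rpc load_subsystem_config
local_nvme_trid=$($rpc framework_get_config bdev \
    | jq -r '.[].params | select(.name=="Nvme0").traddr')    # 0000:0b:00.0 in this run
$rpc bdev_malloc_create 64 512                               # returns Malloc0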
00:22:49.884 09:56:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:22:49.884 09:56:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:49.884 09:56:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:49.884 09:56:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:50.142 [2024-11-20 09:56:26.989399] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.142 09:56:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:50.399 09:56:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:50.399 09:56:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:50.656 09:56:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:50.656 09:56:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:51.246 09:56:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:51.246 [2024-11-20 09:56:28.073354] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.246 09:56:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:51.503 09:56:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:22:51.503 09:56:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:22:51.503 09:56:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:51.503 09:56:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:22:52.880 Initializing NVMe Controllers 00:22:52.880 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:22:52.880 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:22:52.880 Initialization complete. Launching workers. 
00:22:52.880 ======================================================== 00:22:52.880 Latency(us) 00:22:52.880 Device Information : IOPS MiB/s Average min max 00:22:52.880 PCIE (0000:0b:00.0) NSID 1 from core 0: 85319.87 333.28 374.34 31.55 5036.18 00:22:52.880 ======================================================== 00:22:52.880 Total : 85319.87 333.28 374.34 31.55 5036.18 00:22:52.880 00:22:52.880 09:56:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:54.250 Initializing NVMe Controllers 00:22:54.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:54.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:54.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:54.250 Initialization complete. Launching workers. 00:22:54.250 ======================================================== 00:22:54.250 Latency(us) 00:22:54.250 Device Information : IOPS MiB/s Average min max 00:22:54.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 94.66 0.37 10899.02 137.08 44950.36 00:22:54.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.83 0.18 21520.58 7009.81 48854.61 00:22:54.250 ======================================================== 00:22:54.250 Total : 141.50 0.55 14414.61 137.08 48854.61 00:22:54.250 00:22:54.250 09:56:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:55.622 Initializing NVMe Controllers 00:22:55.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:55.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:55.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:55.622 Initialization complete. Launching workers. 00:22:55.622 ======================================================== 00:22:55.622 Latency(us) 00:22:55.622 Device Information : IOPS MiB/s Average min max 00:22:55.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8522.00 33.29 3763.11 632.42 8306.79 00:22:55.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3856.00 15.06 8334.14 5783.88 16769.81 00:22:55.622 ======================================================== 00:22:55.622 Total : 12378.00 48.35 5187.08 632.42 16769.81 00:22:55.622 00:22:55.622 09:56:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:55.622 09:56:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:55.622 09:56:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:58.153 Initializing NVMe Controllers 00:22:58.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:58.153 Controller IO queue size 128, less than required. 00:22:58.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:22:58.153 Controller IO queue size 128, less than required. 00:22:58.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:58.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:58.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:58.153 Initialization complete. Launching workers. 00:22:58.153 ======================================================== 00:22:58.153 Latency(us) 00:22:58.153 Device Information : IOPS MiB/s Average min max 00:22:58.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1516.76 379.19 86288.86 51412.68 133609.67 00:22:58.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 577.34 144.33 230233.43 103531.76 353767.42 00:22:58.153 ======================================================== 00:22:58.153 Total : 2094.09 523.52 125974.03 51412.68 353767.42 00:22:58.153 00:22:58.153 09:56:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:58.411 No valid NVMe controllers or AIO or URING devices found 00:22:58.411 Initializing NVMe Controllers 00:22:58.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:58.411 Controller IO queue size 128, less than required. 00:22:58.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:58.411 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:58.411 Controller IO queue size 128, less than required. 00:22:58.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:58.411 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:58.411 WARNING: Some requested NVMe devices were skipped 00:22:58.411 09:56:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:00.944 Initializing NVMe Controllers 00:23:00.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:00.944 Controller IO queue size 128, less than required. 00:23:00.944 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:00.944 Controller IO queue size 128, less than required. 00:23:00.944 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:00.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:00.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:00.944 Initialization complete. Launching workers. 
00:23:00.944 00:23:00.944 ==================== 00:23:00.944 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:00.944 TCP transport: 00:23:00.944 polls: 8954 00:23:00.944 idle_polls: 5832 00:23:00.944 sock_completions: 3122 00:23:00.944 nvme_completions: 5903 00:23:00.944 submitted_requests: 8794 00:23:00.944 queued_requests: 1 00:23:00.944 00:23:00.944 ==================== 00:23:00.944 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:00.944 TCP transport: 00:23:00.944 polls: 11880 00:23:00.944 idle_polls: 8431 00:23:00.944 sock_completions: 3449 00:23:00.944 nvme_completions: 6545 00:23:00.944 submitted_requests: 9936 00:23:00.944 queued_requests: 1 00:23:00.944 ======================================================== 00:23:00.944 Latency(us) 00:23:00.944 Device Information : IOPS MiB/s Average min max 00:23:00.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1473.87 368.47 89003.12 51410.55 158332.75 00:23:00.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1634.19 408.55 78533.97 47961.13 111878.22 00:23:00.944 ======================================================== 00:23:00.944 Total : 3108.06 777.02 83498.53 47961.13 158332.75 00:23:00.944 00:23:00.944 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:00.944 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:01.202 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:01.202 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:01.202 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:01.202 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:01.202 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:01.202 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:01.202 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:01.203 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:01.203 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:01.203 rmmod nvme_tcp 00:23:01.203 rmmod nvme_fabrics 00:23:01.203 rmmod nvme_keyring 00:23:01.203 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:01.203 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:01.203 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:01.203 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3800038 ']' 00:23:01.203 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3800038 00:23:01.203 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3800038 ']' 00:23:01.203 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3800038 00:23:01.203 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:01.203 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.203 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3800038 00:23:01.203 09:56:37 
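Note: the throughput/latency tables above come from pairing the target-side RPC sequence traced earlier (TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with the Malloc0 and Nvme0n1 namespaces, data and discovery listeners on 10.0.0.2:4420) with spdk_nvme_perf runs at increasing queue depth and IO size, the last of them using --transport-stat to dump the per-namespace poll and completion counters shown directly above. A trimmed sketch of one such pairing, using only commands already present in this trace:
# Target side: expose Malloc0 and Nvme0n1 over NVMe/TCP on 10.0.0.2:4420.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Host side: the 4 KiB, queue-depth-32, 50/50 random read/write run from above.
"$SPDK/build/bin/spdk_nvme_perf" -q 32 -o 4096 -w randrw -M 50 -t 1 -HI \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'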
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:01.203 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:01.203 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3800038' 00:23:01.203 killing process with pid 3800038 00:23:01.203 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3800038 00:23:01.203 09:56:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3800038 00:23:03.102 09:56:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:03.102 09:56:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:03.102 09:56:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:03.102 09:56:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:03.102 09:56:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:03.102 09:56:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:03.102 09:56:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:03.102 09:56:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:03.102 09:56:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:03.102 09:56:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.102 09:56:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.102 09:56:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:05.005 00:23:05.005 real 0m21.367s 00:23:05.005 user 1m5.349s 00:23:05.005 sys 0m5.720s 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:05.005 ************************************ 00:23:05.005 END TEST nvmf_perf 00:23:05.005 ************************************ 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.005 ************************************ 00:23:05.005 START TEST nvmf_fio_host 00:23:05.005 ************************************ 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:05.005 * Looking for test storage... 
00:23:05.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:05.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.005 --rc genhtml_branch_coverage=1 00:23:05.005 --rc genhtml_function_coverage=1 00:23:05.005 --rc genhtml_legend=1 00:23:05.005 --rc geninfo_all_blocks=1 00:23:05.005 --rc geninfo_unexecuted_blocks=1 00:23:05.005 00:23:05.005 ' 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:05.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.005 --rc genhtml_branch_coverage=1 00:23:05.005 --rc genhtml_function_coverage=1 00:23:05.005 --rc genhtml_legend=1 00:23:05.005 --rc geninfo_all_blocks=1 00:23:05.005 --rc geninfo_unexecuted_blocks=1 00:23:05.005 00:23:05.005 ' 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:05.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.005 --rc genhtml_branch_coverage=1 00:23:05.005 --rc genhtml_function_coverage=1 00:23:05.005 --rc genhtml_legend=1 00:23:05.005 --rc geninfo_all_blocks=1 00:23:05.005 --rc geninfo_unexecuted_blocks=1 00:23:05.005 00:23:05.005 ' 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:05.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.005 --rc genhtml_branch_coverage=1 00:23:05.005 --rc genhtml_function_coverage=1 00:23:05.005 --rc genhtml_legend=1 00:23:05.005 --rc geninfo_all_blocks=1 00:23:05.005 --rc geninfo_unexecuted_blocks=1 00:23:05.005 00:23:05.005 ' 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.005 09:56:41 
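Note: the cmp_versions/decimal xtrace above is autotest_common.sh deciding whether the installed lcov predates the 2.x series (here 1.15, so it does) before exporting LCOV/LCOV_OPTS with branch and function coverage enabled; the same prelude ran before nvmf_perf and runs again here for nvmf_fio_host, presumably because lcov 2.x renamed several of the --rc keys. Outside the harness the same "older than 2" check is commonly written with sort -V; the snippet below is an equivalent stand-alone check, not the implementation used above:
# Version check equivalent to the lt 1.15 2 comparison traced above.
lcov_ver=$(lcov --version | awk '{print $NF}')
if [ "$(printf '%s\n' "$lcov_ver" 2 | sort -V | head -n1)" != "2" ]; then
    echo "lcov $lcov_ver is older than 2.x; keeping the 1.x-style --rc option names"
fi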
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.005 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:05.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:05.006 
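The "line 33: [: : integer expression expected" message just above is benign: build_nvmf_app_args runs '[' '' -eq 1 ']', a numeric test against a variable that is empty in this configuration, and [ cannot treat an empty string as an integer, so the test fails with that complaint and the script simply continues. A minimal reproduction and the usual guard (the variable name below is a stand-in, not the one nvmf/common.sh tests):

    FLAG=""                                   # empty or unset in this run
    [ "$FLAG" -eq 1 ] && echo enabled         # stderr: "[: : integer expression expected"; echo is skipped
    [ "${FLAG:-0}" -eq 1 ] && echo enabled    # guarded form: empty defaults to 0, no error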
09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:05.006 09:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:07.536 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:07.536 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:07.536 Found net devices under 0000:09:00.0: cvl_0_0 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.536 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:07.537 Found net devices under 0000:09:00.1: cvl_0_1 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.537 09:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:07.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:23:07.537 00:23:07.537 --- 10.0.0.2 ping statistics --- 00:23:07.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.537 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
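The ip netns block above is what lets one machine play both ends of the NVMe/TCP connection over the two E810 ports found earlier: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, an iptables rule opens TCP port 4420 on the initiator interface, and the two pings (10.0.0.2 above, plus the reverse ping whose output continues below) confirm reachability in both directions. Condensed, with the interface names and addresses taken from this run:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # The SPDK_NVMF tag in the comment is what nvmftestfini greps for when it restores iptables.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator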
00:23:07.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:23:07.537 00:23:07.537 --- 10.0.0.1 ping statistics --- 00:23:07.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.537 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3804005 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3804005 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3804005 ']' 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.537 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.537 [2024-11-20 09:56:44.200553] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
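With connectivity verified, the harness loads the kernel nvme-tcp module, launches nvmf_tgt inside the target namespace (pid 3804005 here), waits for its RPC socket, and then, in the rpc.py calls that follow, assembles the target: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as a namespace, and data plus discovery listeners on 10.0.0.2:4420. Pulled together, with arguments as in this log and a polling loop standing in for the harness's waitforlisten helper:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &

    # Simplified readiness check: poll until the target answers on /var/tmp/spdk.sock.
    until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" bdev_malloc_create 64 512 -b Malloc1
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420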
00:23:07.537 [2024-11-20 09:56:44.200637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.537 [2024-11-20 09:56:44.273791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:07.537 [2024-11-20 09:56:44.332663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.537 [2024-11-20 09:56:44.332710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.537 [2024-11-20 09:56:44.332733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.537 [2024-11-20 09:56:44.332743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.537 [2024-11-20 09:56:44.332753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.537 [2024-11-20 09:56:44.334394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.537 [2024-11-20 09:56:44.334418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.537 [2024-11-20 09:56:44.334441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.537 [2024-11-20 09:56:44.334446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.794 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.794 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:07.794 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:08.052 [2024-11-20 09:56:44.708351] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.052 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:08.052 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:08.052 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.052 09:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:08.309 Malloc1 00:23:08.309 09:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:08.568 09:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:08.868 09:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:09.125 [2024-11-20 09:56:45.955080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.126 09:56:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:09.382 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:09.382 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:09.382 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:09.382 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:09.382 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:09.382 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:09.382 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:09.382 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:09.382 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:09.382 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:09.382 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:09.383 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:09.383 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:09.383 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:09.383 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:09.383 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:09.383 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:09.383 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:09.383 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:09.383 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:09.383 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:09.383 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:09.383 09:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:09.640 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:09.640 fio-3.35 00:23:09.640 Starting 1 thread 00:23:12.163 00:23:12.163 test: (groupid=0, jobs=1): 
err= 0: pid=3804365: Wed Nov 20 09:56:48 2024 00:23:12.163 read: IOPS=8919, BW=34.8MiB/s (36.5MB/s)(69.9MiB/2007msec) 00:23:12.163 slat (nsec): min=1905, max=164639, avg=2486.43, stdev=1902.23 00:23:12.163 clat (usec): min=2612, max=13757, avg=7805.39, stdev=663.90 00:23:12.163 lat (usec): min=2638, max=13759, avg=7807.88, stdev=663.79 00:23:12.163 clat percentiles (usec): 00:23:12.163 | 1.00th=[ 6325], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7308], 00:23:12.163 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 7963], 00:23:12.163 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8848], 00:23:12.163 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[11731], 99.95th=[12649], 00:23:12.163 | 99.99th=[13698] 00:23:12.163 bw ( KiB/s): min=34624, max=36320, per=99.95%, avg=35658.00, stdev=725.30, samples=4 00:23:12.163 iops : min= 8656, max= 9080, avg=8914.50, stdev=181.32, samples=4 00:23:12.163 write: IOPS=8933, BW=34.9MiB/s (36.6MB/s)(70.0MiB/2007msec); 0 zone resets 00:23:12.163 slat (usec): min=2, max=134, avg= 2.60, stdev= 1.42 00:23:12.163 clat (usec): min=1452, max=11793, avg=6465.66, stdev=536.63 00:23:12.163 lat (usec): min=1461, max=11796, avg=6468.27, stdev=536.57 00:23:12.163 clat percentiles (usec): 00:23:12.163 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5800], 20.00th=[ 6063], 00:23:12.163 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6587], 00:23:12.163 | 70.00th=[ 6718], 80.00th=[ 6849], 90.00th=[ 7111], 95.00th=[ 7242], 00:23:12.163 | 99.00th=[ 7570], 99.50th=[ 7701], 99.90th=[10159], 99.95th=[10683], 00:23:12.163 | 99.99th=[11731] 00:23:12.163 bw ( KiB/s): min=35480, max=35968, per=100.00%, avg=35750.00, stdev=213.90, samples=4 00:23:12.163 iops : min= 8870, max= 8992, avg=8937.50, stdev=53.48, samples=4 00:23:12.163 lat (msec) : 2=0.03%, 4=0.12%, 10=99.70%, 20=0.15% 00:23:12.163 cpu : usr=65.50%, sys=32.85%, ctx=76, majf=0, minf=32 00:23:12.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:12.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:12.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:12.163 issued rwts: total=17901,17930,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:12.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:12.163 00:23:12.163 Run status group 0 (all jobs): 00:23:12.163 READ: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=69.9MiB (73.3MB), run=2007-2007msec 00:23:12.163 WRITE: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=70.0MiB (73.4MB), run=2007-2007msec 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:12.163 09:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:12.163 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:12.163 fio-3.35 00:23:12.163 Starting 1 thread 00:23:14.691 00:23:14.691 test: (groupid=0, jobs=1): err= 0: pid=3804819: Wed Nov 20 09:56:51 2024 00:23:14.691 read: IOPS=8235, BW=129MiB/s (135MB/s)(259MiB/2012msec) 00:23:14.691 slat (nsec): min=2793, max=93675, avg=3765.44, stdev=1859.79 00:23:14.691 clat (usec): min=2196, max=17017, avg=8927.65, stdev=2086.97 00:23:14.691 lat (usec): min=2199, max=17020, avg=8931.42, stdev=2087.01 00:23:14.691 clat percentiles (usec): 00:23:14.691 | 1.00th=[ 4883], 5.00th=[ 5735], 10.00th=[ 6325], 20.00th=[ 7111], 00:23:14.691 | 30.00th=[ 7767], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[ 9372], 00:23:14.691 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11338], 95.00th=[12518], 00:23:14.691 | 99.00th=[15008], 99.50th=[15664], 99.90th=[16450], 99.95th=[16581], 00:23:14.691 | 99.99th=[16712] 00:23:14.691 bw ( KiB/s): min=59392, max=76608, per=51.79%, avg=68248.00, stdev=8865.15, samples=4 00:23:14.691 iops : min= 3712, max= 4788, avg=4265.50, stdev=554.07, samples=4 00:23:14.691 write: IOPS=4854, BW=75.8MiB/s (79.5MB/s)(139MiB/1834msec); 0 zone resets 
00:23:14.691 slat (usec): min=30, max=163, avg=34.03, stdev= 5.61 00:23:14.691 clat (usec): min=3203, max=18627, avg=11510.31, stdev=2006.44 00:23:14.691 lat (usec): min=3235, max=18658, avg=11544.34, stdev=2006.37 00:23:14.692 clat percentiles (usec): 00:23:14.692 | 1.00th=[ 7570], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[ 9765], 00:23:14.692 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:23:14.692 | 70.00th=[12387], 80.00th=[13173], 90.00th=[14353], 95.00th=[15139], 00:23:14.692 | 99.00th=[16581], 99.50th=[17171], 99.90th=[17695], 99.95th=[18220], 00:23:14.692 | 99.99th=[18744] 00:23:14.692 bw ( KiB/s): min=60352, max=79872, per=91.37%, avg=70968.00, stdev=9645.28, samples=4 00:23:14.692 iops : min= 3772, max= 4992, avg=4435.50, stdev=602.83, samples=4 00:23:14.692 lat (msec) : 4=0.18%, 10=54.83%, 20=45.00% 00:23:14.692 cpu : usr=76.78%, sys=21.93%, ctx=43, majf=0, minf=54 00:23:14.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:14.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:14.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:14.692 issued rwts: total=16570,8903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:14.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:14.692 00:23:14.692 Run status group 0 (all jobs): 00:23:14.692 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (271MB), run=2012-2012msec 00:23:14.692 WRITE: bw=75.8MiB/s (79.5MB/s), 75.8MiB/s-75.8MiB/s (79.5MB/s-79.5MB/s), io=139MiB (146MB), run=1834-1834msec 00:23:14.692 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:14.950 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:14.950 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:14.950 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:14.950 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:14.950 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:14.950 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:14.950 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:14.950 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:14.950 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:14.950 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:14.950 rmmod nvme_tcp 00:23:14.950 rmmod nvme_fabrics 00:23:14.950 rmmod nvme_keyring 00:23:14.950 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:14.950 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:14.950 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:14.950 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3804005 ']' 00:23:14.950 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3804005 00:23:14.950 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3804005 ']' 00:23:14.950 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 3804005 00:23:14.950 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:14.950 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:14.951 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3804005 00:23:14.951 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:14.951 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:14.951 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3804005' 00:23:14.951 killing process with pid 3804005 00:23:14.951 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3804005 00:23:14.951 09:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3804005 00:23:15.211 09:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:15.211 09:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:15.211 09:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:15.211 09:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:15.211 09:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:15.211 09:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:15.211 09:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:15.211 09:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:15.211 09:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:15.211 09:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.211 09:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.211 09:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:17.747 00:23:17.747 real 0m12.434s 00:23:17.747 user 0m36.639s 00:23:17.747 sys 0m3.993s 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.747 ************************************ 00:23:17.747 END TEST nvmf_fio_host 00:23:17.747 ************************************ 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.747 ************************************ 00:23:17.747 START TEST nvmf_failover 00:23:17.747 ************************************ 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
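Both fio jobs in this test ran through the SPDK fio plugin rather than the kernel NVMe/TCP initiator: the external ioengine is injected via LD_PRELOAD (after the ldd probing for sanitizer libraries seen above), and the target is addressed entirely through fio's --filename string, so no /dev/nvme* device ever appears on the initiator side. Stripped of the sanitizer handling, the invocation the fio_nvme helper builds is essentially:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    LD_PRELOAD="$SPDK/build/fio/spdk_nvme" /usr/src/fio/fio \
        "$SPDK/app/fio/nvme/example_config.fio" \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096

The second job only swaps in mock_sgl_config.fio and drops the --bs override.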
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:17.747 * Looking for test storage... 00:23:17.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:17.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.747 --rc genhtml_branch_coverage=1 00:23:17.747 --rc genhtml_function_coverage=1 00:23:17.747 --rc genhtml_legend=1 00:23:17.747 --rc geninfo_all_blocks=1 00:23:17.747 --rc geninfo_unexecuted_blocks=1 00:23:17.747 00:23:17.747 ' 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:17.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.747 --rc genhtml_branch_coverage=1 00:23:17.747 --rc genhtml_function_coverage=1 00:23:17.747 --rc genhtml_legend=1 00:23:17.747 --rc geninfo_all_blocks=1 00:23:17.747 --rc geninfo_unexecuted_blocks=1 00:23:17.747 00:23:17.747 ' 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:17.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.747 --rc genhtml_branch_coverage=1 00:23:17.747 --rc genhtml_function_coverage=1 00:23:17.747 --rc genhtml_legend=1 00:23:17.747 --rc geninfo_all_blocks=1 00:23:17.747 --rc geninfo_unexecuted_blocks=1 00:23:17.747 00:23:17.747 ' 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:17.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.747 --rc genhtml_branch_coverage=1 00:23:17.747 --rc genhtml_function_coverage=1 00:23:17.747 --rc genhtml_legend=1 00:23:17.747 --rc geninfo_all_blocks=1 00:23:17.747 --rc geninfo_unexecuted_blocks=1 00:23:17.747 00:23:17.747 ' 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.747 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:17.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
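The failover test starts from the same scaffolding as the fio_host test above, plus two knobs of its own: MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE describe the backing malloc bdev (size in MB, block size in bytes), and a dedicated RPC socket (/var/tmp/bdevperf.sock, set just below) lets the bdevperf application the test drives on the initiator side be controlled separately from the target's /var/tmp/spdk.sock. The two constants feed the same RPC seen earlier in this log; the bdev name here is only illustrative:

    MALLOC_BDEV_SIZE=64     # MB
    MALLOC_BLOCK_SIZE=512   # bytes
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Same pattern as in the fio_host test: the constants become the malloc
    # bdev's size and block size on the target side.
    $rpc_py bdev_malloc_create "$MALLOC_BDEV_SIZE" "$MALLOC_BLOCK_SIZE" -b Malloc1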
00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:17.748 09:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:19.649 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:19.649 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:19.649 Found net devices under 0000:09:00.0: cvl_0_0 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:19.649 Found net devices under 0000:09:00.1: cvl_0_1 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:19.649 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:19.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:19.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:23:19.650 00:23:19.650 --- 10.0.0.2 ping statistics --- 00:23:19.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.650 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:19.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:19.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:23:19.650 00:23:19.650 --- 10.0.0.1 ping statistics --- 00:23:19.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.650 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3807018 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3807018 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3807018 ']' 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.650 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:19.908 [2024-11-20 09:56:56.602712] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:23:19.908 [2024-11-20 09:56:56.602795] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.908 [2024-11-20 09:56:56.674083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:19.908 [2024-11-20 09:56:56.731543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:19.908 [2024-11-20 09:56:56.731606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.908 [2024-11-20 09:56:56.731634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.908 [2024-11-20 09:56:56.731646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.908 [2024-11-20 09:56:56.731656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:19.908 [2024-11-20 09:56:56.733180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.908 [2024-11-20 09:56:56.733230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:19.908 [2024-11-20 09:56:56.733234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.166 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.166 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:20.166 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:20.166 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.166 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:20.166 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.166 09:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:20.425 [2024-11-20 09:56:57.138821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.425 09:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:20.683 Malloc0 00:23:20.683 09:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:20.941 09:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:21.505 09:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:21.505 [2024-11-20 09:56:58.384143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.505 09:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:21.761 [2024-11-20 09:56:58.648876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:21.761 09:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:22.018 [2024-11-20 09:56:58.917789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:23:22.312 09:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3807307 00:23:22.312 09:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:22.312 09:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:22.312 09:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3807307 /var/tmp/bdevperf.sock 00:23:22.312 09:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3807307 ']' 00:23:22.312 09:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.312 09:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.312 09:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.312 09:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.312 09:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:22.586 09:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.586 09:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:22.587 09:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:22.845 NVMe0n1 00:23:22.845 09:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:23.410 00:23:23.410 09:57:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3807475 00:23:23.410 09:57:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:23.410 09:57:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:24.343 09:57:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.601 [2024-11-20 09:57:01.431743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109c340 is same with the state(6) to be set 00:23:24.601 [2024-11-20 09:57:01.431807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109c340 is same with the state(6) to be set 00:23:24.601 [2024-11-20 09:57:01.431825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109c340 is same with the state(6) to be set 00:23:24.601 
[2024-11-20
09:57:01.433208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109c340 is same with the state(6) to be set 00:23:24.602 09:57:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:27.882 09:57:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:28.141 00:23:28.141 09:57:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:28.399 [2024-11-20 09:57:05.138652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109ce40 is same with the state(6) to be set 00:23:28.399 [2024-11-20 09:57:05.138719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109ce40 is same with the state(6) to be set 00:23:28.399 [2024-11-20 09:57:05.138735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109ce40 is same with the state(6) to be set 00:23:28.399 [2024-11-20 09:57:05.138748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109ce40 is same with the state(6) to be set 00:23:28.399 [2024-11-20 09:57:05.138760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109ce40 is same with the state(6) to be set 00:23:28.399 [2024-11-20 09:57:05.138772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109ce40 is same with the state(6) to be set 00:23:28.399 09:57:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:31.684 09:57:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:31.684 [2024-11-20 09:57:08.463730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.684 09:57:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:32.618 09:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:32.876 [2024-11-20 09:57:09.750862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62220 is same with the state(6) to be set 00:23:32.876 [2024-11-20 09:57:09.750923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62220 is same with the state(6) to be set 00:23:32.876 [2024-11-20 09:57:09.750939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62220 is same with the state(6) to be set 00:23:32.876 [2024-11-20 09:57:09.750951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62220 is same with the state(6) to be set 00:23:32.876 [2024-11-20 09:57:09.750980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62220 is same with the state(6) to be set 00:23:32.876 [2024-11-20 09:57:09.751003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62220 is same with the state(6) to be set 00:23:32.876 [2024-11-20 09:57:09.751015] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62220 is same with the state(6) to be set
00:23:32.877 [2024-11-20 09:57:09.751268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62220 is same with the state(6) to be set 00:23:32.877 [2024-11-20 09:57:09.751279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62220 is same with the state(6) to be set 00:23:32.877 [2024-11-20 09:57:09.751315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62220 is same with the state(6) to be set 00:23:32.877 [2024-11-20 09:57:09.751329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62220 is same with the state(6) to be set 00:23:32.877 [2024-11-20 09:57:09.751341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62220 is same with the state(6) to be set 00:23:32.877 [2024-11-20 09:57:09.751357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62220 is same with the state(6) to be set 00:23:32.877 [2024-11-20 09:57:09.751370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62220 is same with the state(6) to be set 00:23:32.877 [2024-11-20 09:57:09.751382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62220 is same with the state(6) to be set 00:23:32.877 [2024-11-20 09:57:09.751394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62220 is same with the state(6) to be set 00:23:32.877 [2024-11-20 09:57:09.751406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf62220 is same with the state(6) to be set 00:23:32.877 09:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3807475 00:23:39.468 { 00:23:39.468 "results": [ 00:23:39.468 { 00:23:39.468 "job": "NVMe0n1", 00:23:39.468 "core_mask": "0x1", 00:23:39.468 "workload": "verify", 00:23:39.468 "status": "finished", 00:23:39.468 "verify_range": { 00:23:39.468 "start": 0, 00:23:39.468 "length": 16384 00:23:39.468 }, 00:23:39.468 "queue_depth": 128, 00:23:39.468 "io_size": 4096, 00:23:39.468 "runtime": 15.008683, 00:23:39.468 "iops": 8503.744132646416, 00:23:39.468 "mibps": 33.21775051815006, 00:23:39.468 "io_failed": 7309, 00:23:39.468 "io_timeout": 0, 00:23:39.468 "avg_latency_us": 14209.210953097323, 00:23:39.468 "min_latency_us": 524.8948148148148, 00:23:39.468 "max_latency_us": 17476.266666666666 00:23:39.468 } 00:23:39.468 ], 00:23:39.468 "core_count": 1 00:23:39.468 } 00:23:39.468 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3807307 00:23:39.468 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3807307 ']' 00:23:39.468 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3807307 00:23:39.468 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:39.468 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.468 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3807307 00:23:39.468 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:39.468 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:39.468 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3807307' 00:23:39.468 killing process with pid 
3807307
00:23:39.468 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3807307
00:23:39.468 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3807307
00:23:39.468 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:39.468 [2024-11-20 09:56:58.986246] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization...
00:23:39.468 [2024-11-20 09:56:58.986374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807307 ]
00:23:39.468 [2024-11-20 09:56:59.058523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:39.468 [2024-11-20 09:56:59.118086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:39.468 Running I/O for 15 seconds... 8502.00 IOPS, 33.21 MiB/s [2024-11-20T08:57:16.382Z]
[2024-11-20 09:57:01.433702 - 09:57:01.437535] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 lba:78736..79344 len:8 (SGL TRANSPORT DATA BLOCK) and WRITE sqid:1 nsid:1 lba:79352..79744 len:8 (SGL DATA BLOCK OFFSET), each completed as ABORTED - SQ DELETION (00/08) qid:1 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 09:57:01.437566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-11-20 09:57:01.437581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-11-20 09:57:01.437601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79752 len:8 PRP1 0x0 PRP2 0x0
[2024-11-20 09:57:01.437615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.472 [2024-11-20 09:57:01.437691] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[2024-11-20 09:57:01.437728 - 09:57:01.437825] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0..3 nsid:0 cdw10:00000000 cdw11:00000000, each completed as ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 09:57:01.437846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
[2024-11-20 09:57:01.441135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
[2024-11-20 09:57:01.441174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc05560 (9): Bad file descriptor
[2024-11-20 09:57:01.465211] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
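What the try.txt excerpt above records is the failover itself: the target drops the 10.0.0.2:4420 listener, every command still queued on the deleted submission queue is completed as ABORTED - SQ DELETION, and bdev_nvme fails the controller over to 10.0.0.2:4421 and resets it, after which I/O resumes. Below is a minimal sketch of reproducing the same event by hand against an already-running tcp target; the bdevperf path, the NVMe0 bdev name, the socket path, and the exact flags are illustrative assumptions rather than the arguments failover.sh actually passes.

# Sketch only. Assumes an SPDK nvmf/tcp target is already serving
# nqn.2016-06.io.spdk:cnode1 with listeners on 10.0.0.2:4420 and 10.0.0.2:4421.
BDEVPERF=./build/examples/bdevperf   # location is an assumption; some trees build it under test/bdev/bdevperf/
SOCK=/var/tmp/bdevperf.sock

# Start bdevperf idle (-z) so the NVMe paths can be attached over its RPC socket first.
$BDEVPERF -z -r $SOCK -q 128 -o 4096 -w verify -t 15 &

# Register the primary path and an alternate path under the same bdev name.
# Depending on the SPDK version, the second call may need an explicit
# "-x failover" multipath mode to be accepted as an alternate trid.
scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1

# Kick off the timed run (helper flags vary between SPDK versions), then pull
# the first listener out from under it on the target side. The SQ deletion
# aborts the in-flight commands and bdev_nvme retries them on the 4421 path,
# which is the NOTICE sequence captured in try.txt.
test/bdev/bdevperf/bdevperf.py -s $SOCK -t 20 perform_tests &
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420

The IOPS samples printed on either side of the abort burst are the quickest health check: throughput dips slightly while the reset runs and recovers once the 4421 path is live.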
00:23:39.472 8359.00 IOPS, 32.65 MiB/s [2024-11-20T08:57:16.386Z] 8476.00 IOPS, 33.11 MiB/s [2024-11-20T08:57:16.386Z] 8526.00 IOPS, 33.30 MiB/s [2024-11-20T08:57:16.386Z]
[2024-11-20 09:57:05.139610 - 09:57:05.141786] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 lba:78968..79336 len:8 (SGL TRANSPORT DATA BLOCK) and WRITE sqid:1 nsid:1 lba:79344..79520 len:8 (SGL DATA BLOCK OFFSET), each completed as ABORTED - SQ DELETION (00/08) qid:1 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 09:57:05.141801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79528 len:8
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.475 [2024-11-20 09:57:05.141815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.475 [2024-11-20 09:57:05.141830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.475 [2024-11-20 09:57:05.141845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.475 [2024-11-20 09:57:05.141860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.475 [2024-11-20 09:57:05.141876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.475 [2024-11-20 09:57:05.141891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.475 [2024-11-20 09:57:05.141906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.475 [2024-11-20 09:57:05.141922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.475 [2024-11-20 09:57:05.141936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.475 [2024-11-20 09:57:05.141951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.475 [2024-11-20 09:57:05.141966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.475 [2024-11-20 09:57:05.141981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.475 [2024-11-20 09:57:05.141996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.475 [2024-11-20 09:57:05.142011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.475 [2024-11-20 09:57:05.142029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.475 [2024-11-20 09:57:05.142045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 
[2024-11-20 09:57:05.142119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142429] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.476 [2024-11-20 09:57:05.142854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.476 [2024-11-20 09:57:05.142868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.142883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.477 [2024-11-20 09:57:05.142897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.142912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.477 [2024-11-20 09:57:05.142927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.142942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.477 [2024-11-20 09:57:05.142956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.142971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.477 [2024-11-20 09:57:05.142985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.143000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.477 [2024-11-20 09:57:05.143014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.143029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.477 [2024-11-20 09:57:05.143043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.143082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.477 [2024-11-20 09:57:05.143101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79864 len:8 PRP1 0x0 PRP2 0x0 00:23:39.477 [2024-11-20 09:57:05.143115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.143134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.477 [2024-11-20 09:57:05.143146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.477 [2024-11-20 09:57:05.143157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79872 len:8 PRP1 0x0 PRP2 0x0 00:23:39.477 [2024-11-20 09:57:05.143170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.143184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.477 [2024-11-20 09:57:05.143198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.477 [2024-11-20 09:57:05.143210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79880 len:8 PRP1 0x0 PRP2 0x0 00:23:39.477 [2024-11-20 09:57:05.143223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.143237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.477 [2024-11-20 09:57:05.143248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.477 [2024-11-20 09:57:05.143259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79888 len:8 PRP1 0x0 PRP2 0x0 00:23:39.477 [2024-11-20 09:57:05.143272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.143285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.477 [2024-11-20 09:57:05.143296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.477 [2024-11-20 09:57:05.143316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79896 len:8 PRP1 0x0 PRP2 0x0 00:23:39.477 [2024-11-20 09:57:05.143330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.143343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.477 [2024-11-20 09:57:05.143355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.477 [2024-11-20 09:57:05.143366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79904 len:8 
PRP1 0x0 PRP2 0x0 00:23:39.477 [2024-11-20 09:57:05.143379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.143392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.477 [2024-11-20 09:57:05.143403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.477 [2024-11-20 09:57:05.143414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79912 len:8 PRP1 0x0 PRP2 0x0 00:23:39.477 [2024-11-20 09:57:05.143427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.143440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.477 [2024-11-20 09:57:05.143450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.477 [2024-11-20 09:57:05.143461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79920 len:8 PRP1 0x0 PRP2 0x0 00:23:39.477 [2024-11-20 09:57:05.143474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.143487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.477 [2024-11-20 09:57:05.143498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.477 [2024-11-20 09:57:05.143509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79928 len:8 PRP1 0x0 PRP2 0x0 00:23:39.477 [2024-11-20 09:57:05.143522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.143535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.477 [2024-11-20 09:57:05.143546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.477 [2024-11-20 09:57:05.143557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79936 len:8 PRP1 0x0 PRP2 0x0 00:23:39.477 [2024-11-20 09:57:05.143570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.143587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.477 [2024-11-20 09:57:05.143601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.477 [2024-11-20 09:57:05.143613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79944 len:8 PRP1 0x0 PRP2 0x0 00:23:39.477 [2024-11-20 09:57:05.143626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.143640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.477 [2024-11-20 09:57:05.143651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.477 [2024-11-20 09:57:05.143663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79952 len:8 PRP1 0x0 PRP2 0x0 00:23:39.477 [2024-11-20 
09:57:05.143677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.143690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.477 [2024-11-20 09:57:05.143701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.477 [2024-11-20 09:57:05.143713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79960 len:8 PRP1 0x0 PRP2 0x0 00:23:39.477 [2024-11-20 09:57:05.143726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.143739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.477 [2024-11-20 09:57:05.143750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.477 [2024-11-20 09:57:05.143762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79968 len:8 PRP1 0x0 PRP2 0x0 00:23:39.477 [2024-11-20 09:57:05.143775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.477 [2024-11-20 09:57:05.143788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.477 [2024-11-20 09:57:05.143799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.477 [2024-11-20 09:57:05.143811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79976 len:8 PRP1 0x0 PRP2 0x0 00:23:39.478 [2024-11-20 09:57:05.143824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:05.143837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.478 [2024-11-20 09:57:05.143849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.478 [2024-11-20 09:57:05.143860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79984 len:8 PRP1 0x0 PRP2 0x0 00:23:39.478 [2024-11-20 09:57:05.143873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:05.143937] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:39.478 [2024-11-20 09:57:05.143976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.478 [2024-11-20 09:57:05.143995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:05.144011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.478 [2024-11-20 09:57:05.144024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:05.144038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.478 [2024-11-20 
09:57:05.144056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:05.144070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.478 [2024-11-20 09:57:05.144083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:05.144096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:39.478 [2024-11-20 09:57:05.147349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:39.478 [2024-11-20 09:57:05.147389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc05560 (9): Bad file descriptor 00:23:39.478 [2024-11-20 09:57:05.177577] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:23:39.478 8460.40 IOPS, 33.05 MiB/s [2024-11-20T08:57:16.392Z] 8489.33 IOPS, 33.16 MiB/s [2024-11-20T08:57:16.392Z] 8542.43 IOPS, 33.37 MiB/s [2024-11-20T08:57:16.392Z] 8565.75 IOPS, 33.46 MiB/s [2024-11-20T08:57:16.392Z] 8555.00 IOPS, 33.42 MiB/s [2024-11-20T08:57:16.392Z] [2024-11-20 09:57:09.751828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.478 [2024-11-20 09:57:09.751874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.751900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.478 [2024-11-20 09:57:09.751916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.751932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.478 [2024-11-20 09:57:09.751947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.751962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.478 [2024-11-20 09:57:09.751976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.751991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.478 [2024-11-20 09:57:09.752005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.478 [2024-11-20 09:57:09.752033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752049] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.478 [2024-11-20 09:57:09.752062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.478 [2024-11-20 09:57:09.752091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.478 [2024-11-20 09:57:09.752119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.478 [2024-11-20 09:57:09.752155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.478 [2024-11-20 09:57:09.752184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.478 [2024-11-20 09:57:09.752212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.478 [2024-11-20 09:57:09.752240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.478 [2024-11-20 09:57:09.752270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.478 [2024-11-20 09:57:09.752298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.478 [2024-11-20 09:57:09.752356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13592 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.478 [2024-11-20 09:57:09.752386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.478 [2024-11-20 09:57:09.752416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.478 [2024-11-20 09:57:09.752446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.478 [2024-11-20 09:57:09.752475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.478 [2024-11-20 09:57:09.752504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.478 [2024-11-20 09:57:09.752543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.478 [2024-11-20 09:57:09.752573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.478 [2024-11-20 09:57:09.752603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.478 [2024-11-20 09:57:09.752647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.478 [2024-11-20 09:57:09.752662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.478 [2024-11-20 09:57:09.752676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.752691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:39.479 [2024-11-20 09:57:09.752705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.752719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.752732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.752747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.752761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.752775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.752789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.752803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.752817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.752832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.752845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.752859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.752873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.752887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.752902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.752927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.752942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.752956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.752970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.752985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.752998] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.479 [2024-11-20 09:57:09.753674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.479 [2024-11-20 09:57:09.753692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.480 [2024-11-20 09:57:09.753706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.753721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.480 [2024-11-20 09:57:09.753735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.753750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.480 [2024-11-20 09:57:09.753763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.753777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.480 [2024-11-20 09:57:09.753791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.753806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.480 [2024-11-20 09:57:09.753819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.753834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.480 [2024-11-20 09:57:09.753848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.753863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.480 [2024-11-20 09:57:09.753877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.753891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.480 [2024-11-20 09:57:09.753905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.753919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.480 [2024-11-20 09:57:09.753933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.753947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.480 [2024-11-20 09:57:09.753961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.753976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.480 [2024-11-20 09:57:09.753989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.480 [2024-11-20 09:57:09.754018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.480 [2024-11-20 09:57:09.754046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.480 [2024-11-20 09:57:09.754078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.480 [2024-11-20 09:57:09.754106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:39.480 [2024-11-20 09:57:09.754205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754524] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.480 [2024-11-20 09:57:09.754595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754845] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.480 [2024-11-20 09:57:09.754875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.480 [2024-11-20 09:57:09.754889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.754905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.754920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.754935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.754948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.754964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.754977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.754992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.755006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.755035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.755063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.755092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.755121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14448 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.755149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.755178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.755211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.755240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.755269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.755299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.755337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.755366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.755395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.755424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 
[2024-11-20 09:57:09.755453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.755483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.755512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.481 [2024-11-20 09:57:09.755541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.481 [2024-11-20 09:57:09.755570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.481 [2024-11-20 09:57:09.755622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14072 len:8 PRP1 0x0 PRP2 0x0 00:23:39.481 [2024-11-20 09:57:09.755636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.481 [2024-11-20 09:57:09.755667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.481 [2024-11-20 09:57:09.755679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14080 len:8 PRP1 0x0 PRP2 0x0 00:23:39.481 [2024-11-20 09:57:09.755691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.481 [2024-11-20 09:57:09.755715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.481 [2024-11-20 09:57:09.755730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14088 len:8 PRP1 0x0 PRP2 0x0 00:23:39.481 [2024-11-20 09:57:09.755744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.481 [2024-11-20 09:57:09.755769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.481 [2024-11-20 09:57:09.755781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:14096 len:8 PRP1 0x0 PRP2 0x0 00:23:39.481 [2024-11-20 09:57:09.755793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.481 [2024-11-20 09:57:09.755818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.481 [2024-11-20 09:57:09.755829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14104 len:8 PRP1 0x0 PRP2 0x0 00:23:39.481 [2024-11-20 09:57:09.755842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:39.481 [2024-11-20 09:57:09.755867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:39.481 [2024-11-20 09:57:09.755878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14112 len:8 PRP1 0x0 PRP2 0x0 00:23:39.481 [2024-11-20 09:57:09.755891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.755960] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:39.481 [2024-11-20 09:57:09.755999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.481 [2024-11-20 09:57:09.756017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.756034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.481 [2024-11-20 09:57:09.756047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.756061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.481 [2024-11-20 09:57:09.756074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.756093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.481 [2024-11-20 09:57:09.756106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.481 [2024-11-20 09:57:09.756120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:39.481 [2024-11-20 09:57:09.759393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:39.481 [2024-11-20 09:57:09.759433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc05560 (9): Bad file descriptor 00:23:39.481 [2024-11-20 09:57:09.874847] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
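The abort storm above is the expected side effect of deleting the submission queue while the path fails over from 10.0.0.2:4422 back to 10.0.0.2:4420: every queued READ/WRITE completes with ABORTED - SQ DELETION before the controller is reset on the surviving path. The trace that follows verifies the run by counting the successful resets in the captured bdevperf output; a minimal stand-alone sketch of that check (the log path and expected count here are placeholders, not taken from the harness) could look like:

#!/usr/bin/env bash
# Hedged sketch: count successful controller resets in a saved bdevperf log.
# LOG_FILE and EXPECTED are illustrative; the harness greps its own captured output.
LOG_FILE=${1:-bdevperf.log}
EXPECTED=3
count=$(grep -c 'Resetting controller successful' "$LOG_FILE")
if (( count != EXPECTED )); then
    echo "expected $EXPECTED successful resets, saw $count" >&2
    exit 1
fi
echo "failover reset count OK: $count"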
00:23:39.482 8449.40 IOPS, 33.01 MiB/s [2024-11-20T08:57:16.396Z] 8458.18 IOPS, 33.04 MiB/s [2024-11-20T08:57:16.396Z] 8483.42 IOPS, 33.14 MiB/s [2024-11-20T08:57:16.396Z] 8487.62 IOPS, 33.15 MiB/s [2024-11-20T08:57:16.396Z] 8498.21 IOPS, 33.20 MiB/s [2024-11-20T08:57:16.396Z] 8500.13 IOPS, 33.20 MiB/s 00:23:39.482 Latency(us) 00:23:39.482 [2024-11-20T08:57:16.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.482 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:39.482 Verification LBA range: start 0x0 length 0x4000 00:23:39.482 NVMe0n1 : 15.01 8503.74 33.22 486.98 0.00 14209.21 524.89 17476.27 00:23:39.482 [2024-11-20T08:57:16.396Z] =================================================================================================================== 00:23:39.482 [2024-11-20T08:57:16.396Z] Total : 8503.74 33.22 486.98 0.00 14209.21 524.89 17476.27 00:23:39.482 Received shutdown signal, test time was about 15.000000 seconds 00:23:39.482 00:23:39.482 Latency(us) 00:23:39.482 [2024-11-20T08:57:16.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.482 [2024-11-20T08:57:16.396Z] =================================================================================================================== 00:23:39.482 [2024-11-20T08:57:16.396Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:39.482 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:39.482 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:39.482 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:39.482 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3809912 00:23:39.482 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:39.482 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3809912 /var/tmp/bdevperf.sock 00:23:39.482 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3809912 ']' 00:23:39.482 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.482 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.482 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
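Because bdevperf is launched with -z, it idles until it is configured over the RPC socket named above. A condensed sketch of that configuration step, mirroring the rpc.py calls traced below (addresses, ports, NQN and bdev name are the ones used in this run; paths are shortened and should be treated as illustrative):

# Expose the two extra failover ports on the target side.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# Attach the same subsystem through the bdevperf RPC socket once per port,
# marking each path as a failover target for bdev NVMe0.
for port in 4420 4421 4422; do
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x failover
done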
00:23:39.482 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.482 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:39.482 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.482 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:39.482 09:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:39.482 [2024-11-20 09:57:16.136822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:39.482 09:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:39.739 [2024-11-20 09:57:16.421627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:39.739 09:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:39.997 NVMe0n1 00:23:39.997 09:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:40.255 00:23:40.256 09:57:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:40.821 00:23:40.821 09:57:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:40.822 09:57:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:41.079 09:57:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:41.337 09:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:44.617 09:57:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:44.617 09:57:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:44.617 09:57:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3810585 00:23:44.617 09:57:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:44.617 09:57:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3810585 00:23:45.551 { 00:23:45.551 "results": [ 00:23:45.551 { 00:23:45.551 "job": "NVMe0n1", 00:23:45.551 "core_mask": "0x1", 
00:23:45.551 "workload": "verify", 00:23:45.552 "status": "finished", 00:23:45.552 "verify_range": { 00:23:45.552 "start": 0, 00:23:45.552 "length": 16384 00:23:45.552 }, 00:23:45.552 "queue_depth": 128, 00:23:45.552 "io_size": 4096, 00:23:45.552 "runtime": 1.008388, 00:23:45.552 "iops": 8623.66470049227, 00:23:45.552 "mibps": 33.68619023629793, 00:23:45.552 "io_failed": 0, 00:23:45.552 "io_timeout": 0, 00:23:45.552 "avg_latency_us": 14770.371526116733, 00:23:45.552 "min_latency_us": 2281.6237037037035, 00:23:45.552 "max_latency_us": 12913.01925925926 00:23:45.552 } 00:23:45.552 ], 00:23:45.552 "core_count": 1 00:23:45.552 } 00:23:45.552 09:57:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:45.552 [2024-11-20 09:57:15.655071] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:23:45.552 [2024-11-20 09:57:15.655167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3809912 ] 00:23:45.552 [2024-11-20 09:57:15.723644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.552 [2024-11-20 09:57:15.780583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.552 [2024-11-20 09:57:17.982729] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:45.552 [2024-11-20 09:57:17.982814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.552 [2024-11-20 09:57:17.982838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.552 [2024-11-20 09:57:17.982855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.552 [2024-11-20 09:57:17.982868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.552 [2024-11-20 09:57:17.982882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.552 [2024-11-20 09:57:17.982910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.552 [2024-11-20 09:57:17.982925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.552 [2024-11-20 09:57:17.982938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.552 [2024-11-20 09:57:17.982952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:23:45.552 [2024-11-20 09:57:17.982996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:45.552 [2024-11-20 09:57:17.983027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9e560 (9): Bad file descriptor 00:23:45.552 [2024-11-20 09:57:18.075441] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:45.552 Running I/O for 1 seconds... 00:23:45.552 8568.00 IOPS, 33.47 MiB/s 00:23:45.552 Latency(us) 00:23:45.552 [2024-11-20T08:57:22.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.552 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:45.552 Verification LBA range: start 0x0 length 0x4000 00:23:45.552 NVMe0n1 : 1.01 8623.66 33.69 0.00 0.00 14770.37 2281.62 12913.02 00:23:45.552 [2024-11-20T08:57:22.466Z] =================================================================================================================== 00:23:45.552 [2024-11-20T08:57:22.466Z] Total : 8623.66 33.69 0.00 0.00 14770.37 2281.62 12913.02 00:23:45.552 09:57:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:45.552 09:57:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:45.810 09:57:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:46.069 09:57:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:46.069 09:57:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:46.635 09:57:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:46.635 09:57:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:49.919 09:57:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:49.919 09:57:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:49.919 09:57:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3809912 00:23:49.919 09:57:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3809912 ']' 00:23:49.919 09:57:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3809912 00:23:49.919 09:57:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:49.919 09:57:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.919 09:57:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3809912 00:23:50.178 09:57:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:50.178 09:57:26 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:50.178 09:57:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3809912' 00:23:50.178 killing process with pid 3809912 00:23:50.178 09:57:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3809912 00:23:50.178 09:57:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3809912 00:23:50.178 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:50.178 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:50.436 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:50.436 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:50.436 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:50.436 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:50.436 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:50.436 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:50.436 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:50.436 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:50.436 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:50.436 rmmod nvme_tcp 00:23:50.694 rmmod nvme_fabrics 00:23:50.694 rmmod nvme_keyring 00:23:50.694 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:50.694 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:50.694 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:50.694 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3807018 ']' 00:23:50.694 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3807018 00:23:50.694 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3807018 ']' 00:23:50.694 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3807018 00:23:50.694 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:50.694 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.694 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3807018 00:23:50.694 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:50.694 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:50.694 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3807018' 00:23:50.694 killing process with pid 3807018 00:23:50.694 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3807018 00:23:50.694 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3807018 00:23:50.952 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:23:50.952 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:50.952 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:50.952 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:50.952 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:50.952 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:50.952 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:50.952 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:50.952 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:50.952 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.952 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.952 09:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.857 09:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:52.857 00:23:52.857 real 0m35.610s 00:23:52.857 user 2m5.349s 00:23:52.857 sys 0m6.078s 00:23:52.857 09:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:52.857 09:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:52.857 ************************************ 00:23:52.857 END TEST nvmf_failover 00:23:52.857 ************************************ 00:23:52.857 09:57:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:52.857 09:57:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:52.857 09:57:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:52.857 09:57:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.116 ************************************ 00:23:53.116 START TEST nvmf_host_discovery 00:23:53.116 ************************************ 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:53.116 * Looking for test storage... 
00:23:53.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:53.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.116 --rc genhtml_branch_coverage=1 00:23:53.116 --rc genhtml_function_coverage=1 00:23:53.116 --rc genhtml_legend=1 00:23:53.116 --rc geninfo_all_blocks=1 00:23:53.116 --rc geninfo_unexecuted_blocks=1 00:23:53.116 00:23:53.116 ' 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:53.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.116 --rc genhtml_branch_coverage=1 00:23:53.116 --rc genhtml_function_coverage=1 00:23:53.116 --rc genhtml_legend=1 00:23:53.116 --rc geninfo_all_blocks=1 00:23:53.116 --rc geninfo_unexecuted_blocks=1 00:23:53.116 00:23:53.116 ' 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:53.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.116 --rc genhtml_branch_coverage=1 00:23:53.116 --rc genhtml_function_coverage=1 00:23:53.116 --rc genhtml_legend=1 00:23:53.116 --rc geninfo_all_blocks=1 00:23:53.116 --rc geninfo_unexecuted_blocks=1 00:23:53.116 00:23:53.116 ' 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:53.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.116 --rc genhtml_branch_coverage=1 00:23:53.116 --rc genhtml_function_coverage=1 00:23:53.116 --rc genhtml_legend=1 00:23:53.116 --rc geninfo_all_blocks=1 00:23:53.116 --rc geninfo_unexecuted_blocks=1 00:23:53.116 00:23:53.116 ' 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:53.116 09:57:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.116 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:53.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:53.117 09:57:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:55.646 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:55.646 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.646 09:57:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:55.646 Found net devices under 0000:09:00.0: cvl_0_0 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.646 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:55.647 Found net devices under 0000:09:00.1: cvl_0_1 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.647 
09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:55.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:23:55.647 00:23:55.647 --- 10.0.0.2 ping statistics --- 00:23:55.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.647 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:55.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:23:55.647 00:23:55.647 --- 10.0.0.1 ping statistics --- 00:23:55.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.647 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3813291 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3813291 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3813291 ']' 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.647 [2024-11-20 09:57:32.305864] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
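The nvmf_tcp_init sequence traced above carves the two Intel ports this run detected (cvl_0_0 and cvl_0_1, driver ice) into a point-to-point test bed: cvl_0_0 is moved into a private network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1. A condensed sketch of that sequence, assuming the interface names and 10.0.0.0/24 addressing from this particular run (they are detected, not fixed constants):

  # flush any stale addresses, then isolate the target port in its own netns
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator address in the root namespace, target address inside the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # connectivity sanity checks in both directions (as in the ping output above)
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1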
00:23:55.647 [2024-11-20 09:57:32.305930] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.647 [2024-11-20 09:57:32.375074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.647 [2024-11-20 09:57:32.430209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.647 [2024-11-20 09:57:32.430267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.647 [2024-11-20 09:57:32.430280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.647 [2024-11-20 09:57:32.430312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.647 [2024-11-20 09:57:32.430324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:55.647 [2024-11-20 09:57:32.430966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:55.647 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.905 [2024-11-20 09:57:32.574410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.905 [2024-11-20 09:57:32.582623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.905 null0 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.905 null1 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3813338 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3813338 /tmp/host.sock 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3813338 ']' 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:55.905 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:55.905 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.906 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.906 [2024-11-20 09:57:32.656971] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
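At this point discovery.sh is running two SPDK applications: the target nvmf_tgt (core mask 0x2, default RPC socket /var/tmp/spdk.sock) inside cvl_0_0_ns_spdk, and a second nvmf_tgt (core mask 0x1) bound to /tmp/host.sock, whose bdev_nvme module plays the NVMe-oF host role. A rough stand-in for that launch-and-wait pattern, assuming the commands are run from an SPDK checkout and using rpc_get_methods in place of the harness's waitforlisten helper:

  # target application, living inside the netns, default RPC socket
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # "host" application on its own RPC socket; bdev_nvme here acts as the initiator
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  # wait until the host socket answers RPCs (stand-in for waitforlisten)
  until ./scripts/rpc.py -s /tmp/host.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done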
00:23:55.906 [2024-11-20 09:57:32.657039] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3813338 ] 00:23:55.906 [2024-11-20 09:57:32.723190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.906 [2024-11-20 09:57:32.781054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:56.163 09:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.163 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:56.163 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:56.163 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.163 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.163 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:56.163 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:56.163 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:56.163 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:56.163 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.163 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:56.163 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:56.163 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.163 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.164 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.164 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:56.164 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:56.164 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.164 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:56.164 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.164 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:56.164 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:56.164 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.421 [2024-11-20 09:57:33.212237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:56.421 09:57:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:56.421 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:56.422 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:56.422 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.422 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.679 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.679 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:56.679 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:56.679 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:56.679 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:56.679 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:56.679 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:56.679 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:56.679 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.679 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:56.679 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.679 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:56.679 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:56.679 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.679 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:23:56.679 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:57.245 [2024-11-20 09:57:33.961540] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:57.245 [2024-11-20 09:57:33.961564] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:57.245 [2024-11-20 09:57:33.961586] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:57.245 
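The rpc_cmd calls traced above effectively drive scripts/rpc.py against the appropriate RPC socket. Condensed, and shown with a bare rpc.py for the target's default socket, the configuration up to this point is roughly (a sketch, not a quote from discovery.sh):

  # target side: transport, discovery listener, and two null bdevs
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc.py bdev_null_create null0 1000 512
  rpc.py bdev_null_create null1 1000 512
  rpc.py bdev_wait_for_examine
  # host side: start discovery against port 8009 before cnode0 even exists
  rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # target side: now create the subsystem and make it visible to the host; once the
  # host entry is added, the next discovery log page reports cnode0:4420 and the
  # host attaches controller "nvme0" automatically (the attach messages below)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  # (null1 is added as a second namespace later, at discovery.sh@111, yielding nvme0n2)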
[2024-11-20 09:57:34.047904] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:57.245 [2024-11-20 09:57:34.102716] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:57.245 [2024-11-20 09:57:34.103623] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x70bf80:1 started. 00:23:57.245 [2024-11-20 09:57:34.105427] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:57.245 [2024-11-20 09:57:34.105447] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:57.245 [2024-11-20 09:57:34.109943] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x70bf80 was disconnected and freed. delete nvme_qpair. 00:23:57.503 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:57.503 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:57.503 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:57.504 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:57.504 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:57.504 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.504 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:57.504 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.504 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:57.504 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.762 09:57:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:57.762 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:58.021 [2024-11-20 09:57:34.762258] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x70c660:1 started. 00:23:58.021 [2024-11-20 09:57:34.771424] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x70c660 was disconnected and freed. delete nvme_qpair. 
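Each waitforcondition check above leans on a small set of query helpers whose bodies can be read off the trace (the discovery.sh@55/@59/@63/@74 tags). Reconstructed roughly, with rpc.py again standing in for scripts/rpc.py:

  get_subsystem_names() {   # controller names known to the host app (e.g. "nvme0")
      rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {         # bdevs created from attached namespaces (e.g. "nvme0n1 nvme0n2")
      rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  get_subsystem_paths() {   # trsvcid of every path of one controller (e.g. "4420 4421")
      rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }
  get_notification_count() {  # bdev notifications issued since the last checkpoint
      notification_count=$(rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

waitforcondition itself (autotest_common.sh@918-924 in the trace) simply re-evaluates the quoted condition up to 10 times, sleeping one second between attempts, before returning failure.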
00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.021 [2024-11-20 09:57:34.829195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:58.021 [2024-11-20 09:57:34.829763] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:58.021 [2024-11-20 09:57:34.829791] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 
nvme0n2" ]]' 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:58.021 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.279 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:58.279 09:57:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:58.279 [2024-11-20 09:57:34.955573] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:58.279 [2024-11-20 09:57:35.057522] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:58.279 [2024-11-20 09:57:35.057564] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:58.279 [2024-11-20 09:57:35.057585] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:58.279 [2024-11-20 09:57:35.057595] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:59.214 09:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.214 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.214 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:59.214 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.215 [2024-11-20 09:57:36.037505] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:59.215 [2024-11-20 09:57:36.037544] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:59.215 [2024-11-20 09:57:36.046760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.215 [2024-11-20 09:57:36.046792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:59.215 [2024-11-20 09:57:36.046823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.215 [2024-11-20 09:57:36.046838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.215 [2024-11-20 09:57:36.046853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.215 [2024-11-20 09:57:36.046868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.215 [2024-11-20 09:57:36.046882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.215 [2024-11-20 09:57:36.046896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.215 [2024-11-20 09:57:36.046910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6dc550 is same with the state(6) to be set 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.215 [2024-11-20 09:57:36.056751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6dc550 (9): Bad file descriptor 00:23:59.215 [2024-11-20 09:57:36.066788] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:59.215 [2024-11-20 09:57:36.066810] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:59.215 [2024-11-20 09:57:36.066820] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:59.215 [2024-11-20 09:57:36.066829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:59.215 [2024-11-20 09:57:36.066873] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:59.215 [2024-11-20 09:57:36.067012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.215 [2024-11-20 09:57:36.067042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6dc550 with addr=10.0.0.2, port=4420 00:23:59.215 [2024-11-20 09:57:36.067064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6dc550 is same with the state(6) to be set 00:23:59.215 [2024-11-20 09:57:36.067088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6dc550 (9): Bad file descriptor 00:23:59.215 [2024-11-20 09:57:36.067111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:59.215 [2024-11-20 09:57:36.067126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:59.215 [2024-11-20 09:57:36.067141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:59.215 [2024-11-20 09:57:36.067155] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:59.215 [2024-11-20 09:57:36.067165] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:59.215 [2024-11-20 09:57:36.067174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:59.215 [2024-11-20 09:57:36.076905] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:59.215 [2024-11-20 09:57:36.076926] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:59.215 [2024-11-20 09:57:36.076935] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:59.215 [2024-11-20 09:57:36.076943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:59.215 [2024-11-20 09:57:36.076981] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:59.215 [2024-11-20 09:57:36.077136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.215 [2024-11-20 09:57:36.077164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6dc550 with addr=10.0.0.2, port=4420 00:23:59.215 [2024-11-20 09:57:36.077180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6dc550 is same with the state(6) to be set 00:23:59.215 [2024-11-20 09:57:36.077203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6dc550 (9): Bad file descriptor 00:23:59.215 [2024-11-20 09:57:36.077224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:59.215 [2024-11-20 09:57:36.077238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:59.215 [2024-11-20 09:57:36.077253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:59.215 [2024-11-20 09:57:36.077266] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:59.215 [2024-11-20 09:57:36.077275] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:59.215 [2024-11-20 09:57:36.077283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
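For orientation while reading the repeated checks in this trace: the waits are driven by a generic polling helper plus small per-test query functions. A minimal sketch of the two patterns visible in the xtrace above, keeping the names the trace uses (the real implementations live in test/common/autotest_common.sh and test/nvmf/host/discovery.sh, and rpc_cmd is the suite's wrapper around scripts/rpc.py):

    waitforcondition() {
        # Poll an arbitrary bash condition up to 10 times, one second apart, matching
        # the "local max=10 / (( max-- )) / eval / sleep 1" lines in the trace.
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0   # condition met, stop waiting
            sleep 1
        done
        return 1                       # let the caller fail the test after ~10s
    }

    get_subsystem_paths() {
        # One sorted line of listener ports ("4420 4421", later just "4421") for the
        # named controller, queried over the host application's RPC socket.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }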
00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.215 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:59.215 [2024-11-20 09:57:36.087015] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:59.215 [2024-11-20 09:57:36.087038] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:59.215 [2024-11-20 09:57:36.087047] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:59.215 [2024-11-20 09:57:36.087055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:59.215 [2024-11-20 09:57:36.087094] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:59.215 [2024-11-20 09:57:36.087193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.215 [2024-11-20 09:57:36.087235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6dc550 with addr=10.0.0.2, port=4420 00:23:59.215 [2024-11-20 09:57:36.087252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6dc550 is same with the state(6) to be set 00:23:59.215 [2024-11-20 09:57:36.087274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6dc550 (9): Bad file descriptor 00:23:59.215 [2024-11-20 09:57:36.087320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:59.215 [2024-11-20 09:57:36.087336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:59.215 [2024-11-20 09:57:36.087358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:23:59.215 [2024-11-20 09:57:36.087370] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:59.215 [2024-11-20 09:57:36.087380] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:59.215 [2024-11-20 09:57:36.087388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:59.215 [2024-11-20 09:57:36.097128] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:59.215 [2024-11-20 09:57:36.097150] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:59.216 [2024-11-20 09:57:36.097160] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:59.216 [2024-11-20 09:57:36.097168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:59.216 [2024-11-20 09:57:36.097206] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:59.216 [2024-11-20 09:57:36.097341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.216 [2024-11-20 09:57:36.097370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6dc550 with addr=10.0.0.2, port=4420 00:23:59.216 [2024-11-20 09:57:36.097386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6dc550 is same with the state(6) to be set 00:23:59.216 [2024-11-20 09:57:36.097421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6dc550 (9): Bad file descriptor 00:23:59.216 [2024-11-20 09:57:36.097451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:59.216 [2024-11-20 09:57:36.097467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:59.216 [2024-11-20 09:57:36.097481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:59.216 [2024-11-20 09:57:36.097493] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:59.216 [2024-11-20 09:57:36.097503] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:59.216 [2024-11-20 09:57:36.097511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:59.216 [2024-11-20 09:57:36.107240] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:59.216 [2024-11-20 09:57:36.107261] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:59.216 [2024-11-20 09:57:36.107270] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:59.216 [2024-11-20 09:57:36.107278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:59.216 [2024-11-20 09:57:36.107328] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:59.216 [2024-11-20 09:57:36.107466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.216 [2024-11-20 09:57:36.107494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6dc550 with addr=10.0.0.2, port=4420 00:23:59.216 [2024-11-20 09:57:36.107510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6dc550 is same with the state(6) to be set 00:23:59.216 [2024-11-20 09:57:36.107532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6dc550 (9): Bad file descriptor 00:23:59.216 [2024-11-20 09:57:36.107566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:59.216 [2024-11-20 09:57:36.107583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:59.216 [2024-11-20 09:57:36.107597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:59.216 [2024-11-20 09:57:36.107609] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:59.216 [2024-11-20 09:57:36.107619] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:59.216 [2024-11-20 09:57:36.107627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:59.216 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.216 [2024-11-20 09:57:36.117362] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:59.216 [2024-11-20 09:57:36.117383] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:59.216 [2024-11-20 09:57:36.117392] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:59.216 [2024-11-20 09:57:36.117400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:59.216 [2024-11-20 09:57:36.117440] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:59.216 [2024-11-20 09:57:36.117587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.216 [2024-11-20 09:57:36.117614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6dc550 with addr=10.0.0.2, port=4420 00:23:59.216 [2024-11-20 09:57:36.117636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6dc550 is same with the state(6) to be set 00:23:59.216 [2024-11-20 09:57:36.117659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6dc550 (9): Bad file descriptor 00:23:59.216 [2024-11-20 09:57:36.117680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:59.216 [2024-11-20 09:57:36.117694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:59.216 [2024-11-20 09:57:36.117707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:23:59.216 [2024-11-20 09:57:36.117720] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:59.216 [2024-11-20 09:57:36.117730] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:59.216 [2024-11-20 09:57:36.117738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:59.216 [2024-11-20 09:57:36.124378] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:59.216 [2024-11-20 09:57:36.124407] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:59.216 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:59.216 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == 
expected_count))' 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:59.474 09:57:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:59.474 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.475 09:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.926 [2024-11-20 09:57:37.421012] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:00.926 [2024-11-20 09:57:37.421044] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:00.927 [2024-11-20 09:57:37.421067] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:00.927 [2024-11-20 09:57:37.548470] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:00.927 [2024-11-20 09:57:37.653233] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:00.927 [2024-11-20 09:57:37.654029] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x712a10:1 started. 
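The restart above is the same RPC the test drives throughout: start a discovery service named "nvme" against the discovery subsystem on 10.0.0.2:8009 and, because of -w/--wait-for-attach, only return once the referred controllers have been attached. Outside the rpc_cmd wrapper this would look roughly like the call below (the in-tree scripts/rpc.py path is an assumption):

    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        -w   # block until the controllers from the discovery log page are attached

As the next lines show, repeating the call while a discovery service named "nvme" already exists is the negative case: it is expected to fail with JSON-RPC error -17 ("File exists"), which the NOT/valid_exec_arg wrapper in the trace converts into a passing assertion.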
00:24:00.927 [2024-11-20 09:57:37.656133] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:00.927 [2024-11-20 09:57:37.656176] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.927 request: 00:24:00.927 { 00:24:00.927 "name": "nvme", 00:24:00.927 "trtype": "tcp", 00:24:00.927 "traddr": "10.0.0.2", 00:24:00.927 "adrfam": "ipv4", 00:24:00.927 "trsvcid": "8009", 00:24:00.927 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:00.927 "wait_for_attach": true, 00:24:00.927 "method": "bdev_nvme_start_discovery", 00:24:00.927 "req_id": 1 00:24:00.927 } 00:24:00.927 Got JSON-RPC error response 00:24:00.927 response: 00:24:00.927 { 00:24:00.927 "code": -17, 00:24:00.927 "message": "File exists" 00:24:00.927 } 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.927 09:57:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.927 [2024-11-20 09:57:37.700013] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x712a10 was disconnected and freed. delete nvme_qpair. 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.927 request: 00:24:00.927 { 00:24:00.927 "name": "nvme_second", 00:24:00.927 "trtype": "tcp", 00:24:00.927 "traddr": "10.0.0.2", 00:24:00.927 "adrfam": "ipv4", 00:24:00.927 "trsvcid": "8009", 00:24:00.927 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:00.927 "wait_for_attach": true, 00:24:00.927 "method": 
"bdev_nvme_start_discovery", 00:24:00.927 "req_id": 1 00:24:00.927 } 00:24:00.927 Got JSON-RPC error response 00:24:00.927 response: 00:24:00.927 { 00:24:00.927 "code": -17, 00:24:00.927 "message": "File exists" 00:24:00.927 } 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:00.927 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.186 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:01.186 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:01.186 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:01.186 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:01.186 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:01.186 09:57:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.186 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:01.186 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.186 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:01.186 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.186 09:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.118 [2024-11-20 09:57:38.867576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.118 [2024-11-20 09:57:38.867630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f3aa0 with addr=10.0.0.2, port=8010 00:24:02.118 [2024-11-20 09:57:38.867655] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:02.118 [2024-11-20 09:57:38.867684] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:02.118 [2024-11-20 09:57:38.867697] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:03.052 [2024-11-20 09:57:39.869952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.052 [2024-11-20 09:57:39.869987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f3aa0 with addr=10.0.0.2, port=8010 00:24:03.052 [2024-11-20 09:57:39.870008] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:03.052 [2024-11-20 09:57:39.870021] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:03.052 [2024-11-20 09:57:39.870033] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:03.986 [2024-11-20 09:57:40.872232] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:03.986 request: 00:24:03.986 { 00:24:03.986 "name": "nvme_second", 00:24:03.986 "trtype": "tcp", 00:24:03.986 "traddr": "10.0.0.2", 00:24:03.986 "adrfam": "ipv4", 00:24:03.986 "trsvcid": "8010", 00:24:03.986 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:03.986 "wait_for_attach": false, 00:24:03.986 "attach_timeout_ms": 3000, 00:24:03.986 "method": "bdev_nvme_start_discovery", 00:24:03.986 "req_id": 1 00:24:03.986 } 00:24:03.986 Got JSON-RPC error response 00:24:03.986 response: 00:24:03.986 { 00:24:03.986 "code": -110, 00:24:03.986 "message": "Connection timed out" 00:24:03.986 } 00:24:03.986 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:03.986 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:03.986 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:03.986 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:03.986 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:03.986 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:03.986 09:57:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:03.986 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:03.986 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.986 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.986 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:03.986 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:03.986 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3813338 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:04.245 rmmod nvme_tcp 00:24:04.245 rmmod nvme_fabrics 00:24:04.245 rmmod nvme_keyring 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3813291 ']' 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3813291 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3813291 ']' 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3813291 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:04.245 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3813291 00:24:04.245 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:04.245 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:04.245 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3813291' 00:24:04.245 killing process with pid 3813291 00:24:04.245 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3813291 
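The teardown here is the usual nvmftestfini sequence: unload the host-side NVMe/TCP kernel modules and kill the nvmf target process whose pid was recorded at startup. A condensed sketch of the shape visible in the trace (the real killprocess in autotest_common.sh adds a few safety checks, such as refusing to kill sudo):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1          # nothing recorded, nothing to kill
        kill -0 "$pid" || return 0         # already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true    # reap it so the target is fully gone before the next test
    }

    # Host-side module cleanup, as logged above (the rmmod lines come from modprobe -v).
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics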
00:24:04.245 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3813291 00:24:04.505 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:04.505 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:04.505 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:04.505 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:04.505 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:04.505 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:04.505 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:04.505 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.505 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:04.505 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.505 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.505 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.412 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:06.412 00:24:06.412 real 0m13.519s 00:24:06.412 user 0m19.372s 00:24:06.412 sys 0m2.991s 00:24:06.412 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:06.412 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.412 ************************************ 00:24:06.412 END TEST nvmf_host_discovery 00:24:06.412 ************************************ 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.673 ************************************ 00:24:06.673 START TEST nvmf_host_multipath_status 00:24:06.673 ************************************ 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:06.673 * Looking for test storage... 
00:24:06.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:06.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.673 --rc genhtml_branch_coverage=1 00:24:06.673 --rc genhtml_function_coverage=1 00:24:06.673 --rc genhtml_legend=1 00:24:06.673 --rc geninfo_all_blocks=1 00:24:06.673 --rc geninfo_unexecuted_blocks=1 00:24:06.673 00:24:06.673 ' 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:06.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.673 --rc genhtml_branch_coverage=1 00:24:06.673 --rc genhtml_function_coverage=1 00:24:06.673 --rc genhtml_legend=1 00:24:06.673 --rc geninfo_all_blocks=1 00:24:06.673 --rc geninfo_unexecuted_blocks=1 00:24:06.673 00:24:06.673 ' 00:24:06.673 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:06.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.673 --rc genhtml_branch_coverage=1 00:24:06.673 --rc genhtml_function_coverage=1 00:24:06.673 --rc genhtml_legend=1 00:24:06.674 --rc geninfo_all_blocks=1 00:24:06.674 --rc geninfo_unexecuted_blocks=1 00:24:06.674 00:24:06.674 ' 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:06.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.674 --rc genhtml_branch_coverage=1 00:24:06.674 --rc genhtml_function_coverage=1 00:24:06.674 --rc genhtml_legend=1 00:24:06.674 --rc geninfo_all_blocks=1 00:24:06.674 --rc geninfo_unexecuted_blocks=1 00:24:06.674 00:24:06.674 ' 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
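The version gymnastics above only decide which lcov flags to export: the helper reads lcov --version, and lt 1.15 2 asks whether that version is older than 2 by comparing dot-separated components numerically. A simplified sketch of the comparison pattern the xtrace walks through (the real lt()/cmp_versions() in scripts/common.sh handle more operators; this covers just the strict '<' and '>' forms seen here):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS='.-:'                    # split on dots, dashes and colons, as in the trace
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            if (( a > b )); then [[ $2 == '>' ]]; return; fi   # decided at this component
            if (( a < b )); then [[ $2 == '<' ]]; return; fi
        done
        return 1                           # equal versions satisfy neither strict operator
    }

    # e.g. cmp_versions 1.15 '<' 2 succeeds, so the older-lcov option set is exported.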
00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.674 09:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:09.221 09:57:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:09.221 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
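The trace above shows nvmf/common.sh building its NIC allow-lists (e810, x722, mlx) out of vendor:device ID pairs and then matching the two 0000:09:00.x functions (0x8086:0x159b, bound to the ice driver) against them. A minimal stand-alone sketch of the same sysfs-based discovery, assuming only the 0x8086/0x159b pair visible in this trace (the loop below is illustrative, not the gather_supported_nvmf_pci_devs implementation):

  # Sketch: list kernel net devices backed by Intel E810 (0x8086:0x159b) PCI functions,
  # mirroring the pci_devs / pci_net_devs arrays used by nvmf/common.sh above.
  intel=0x8086 e810=0x159b
  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == "$intel" && $(cat "$pci/device") == "$e810" ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "Found net device ${net##*/} under ${pci##*/}"
      done
  done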
00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.221 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:09.222 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:09.222 Found net devices under 0000:09:00.0: cvl_0_0 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: 
cvl_0_1' 00:24:09.222 Found net devices under 0000:09:00.1: cvl_0_1 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:09.222 09:57:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:09.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:24:09.222 00:24:09.222 --- 10.0.0.2 ping statistics --- 00:24:09.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.222 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:09.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:24:09.222 00:24:09.222 --- 10.0.0.1 ping statistics --- 00:24:09.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.222 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3816394 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3816394 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3816394 ']' 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.222 09:57:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.222 09:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:09.222 [2024-11-20 09:57:45.915092] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:24:09.223 [2024-11-20 09:57:45.915184] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.223 [2024-11-20 09:57:45.996327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:09.223 [2024-11-20 09:57:46.057002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.223 [2024-11-20 09:57:46.057059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.223 [2024-11-20 09:57:46.057088] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.223 [2024-11-20 09:57:46.057100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.223 [2024-11-20 09:57:46.057110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.223 [2024-11-20 09:57:46.059053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.223 [2024-11-20 09:57:46.059060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.481 09:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.481 09:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:09.481 09:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:09.481 09:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:09.481 09:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:09.481 09:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.481 09:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3816394 00:24:09.481 09:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:09.739 [2024-11-20 09:57:46.479594] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.739 09:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:09.996 Malloc0 00:24:09.996 09:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:24:10.254 09:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:10.819 09:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.077 [2024-11-20 09:57:47.737827] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.077 09:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:11.336 [2024-11-20 09:57:48.010523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:11.336 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3816675 00:24:11.336 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:11.336 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:11.336 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3816675 /var/tmp/bdevperf.sock 00:24:11.336 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3816675 ']' 00:24:11.336 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.336 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:11.336 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:11.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
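By this point the target side of the test is fully provisioned: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace, a TCP transport and a 64 MiB / 512-byte-block Malloc0 namespace sit under nqn.2016-06.io.spdk:cnode1, listeners are up on 10.0.0.2:4420 and 4421, and bdevperf has just been launched with -z against its own RPC socket. A condensed sketch of that target-side RPC sequence, reusing the rpc.py invocations visible in the trace ($rpc abbreviates the full scripts/rpc.py path; the ordering is reconstructed from the log rather than copied from multipath_status.sh):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, options as in the trace
  $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose the bdev as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421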
00:24:11.336 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:11.336 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:11.594 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:11.594 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:11.595 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:11.852 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:12.418 Nvme0n1 00:24:12.418 09:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:12.675 Nvme0n1 00:24:12.675 09:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:12.675 09:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:15.203 09:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:15.203 09:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:15.203 09:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:15.203 09:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:16.576 09:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:16.576 09:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:16.576 09:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.576 09:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.576 09:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.576 09:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:16.576 09:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.576 09:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:16.834 09:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:16.834 09:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:16.834 09:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.834 09:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:17.092 09:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.092 09:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:17.092 09:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.092 09:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:17.350 09:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.350 09:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:17.350 09:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.350 09:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:17.607 09:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.607 09:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:17.607 09:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.607 09:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:17.865 09:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.866 09:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:17.866 09:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
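Each set_ANA_state step in the trace simply re-advertises the ANA state of the two listeners and then gives the initiator a second to notice (the sleep 1 that follows). A reconstruction of that helper, inferred from the two nvmf_subsystem_listener_set_ana_state calls around this point (the function body is a sketch, not the original script; $rpc is the scripts/rpc.py path as above):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # set_ANA_state <state for port 4420> <state for port 4421>
  set_ANA_state() {
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }
  set_ANA_state non_optimized optimized   # the transition happening in this step
  sleep 1                                 # let bdev_nvme re-evaluate its I/O paths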
00:24:18.431 09:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:18.431 09:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:19.805 09:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:19.805 09:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:19.805 09:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.805 09:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:19.805 09:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:19.805 09:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:19.805 09:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.805 09:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:20.063 09:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.063 09:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:20.063 09:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.063 09:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:20.321 09:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.321 09:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:20.321 09:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.321 09:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:20.580 09:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.580 09:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:20.580 09:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
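The check_status/port_status cycle that repeats after every ANA transition boils down to querying bdev_nvme_get_io_paths on the bdevperf RPC socket and filtering the result with jq per trsvcid, exactly as the jq -r lines in the trace show. A small sketch of one such check, assuming the same /var/tmp/bdevperf.sock socket (port_status here is a reconstruction of the helper, not the script's literal code):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock
  # port_status <trsvcid> <field> <expected>: compare one io_path attribute
  # (field = current | connected | accessible) against the expected value.
  port_status() {
      local got
      got=$($rpc -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
      [[ $got == "$3" ]]
  }
  # After set_ANA_state non_optimized optimized the 4421 path should become the current one:
  port_status 4420 current false && port_status 4421 current true && echo 'paths as expected'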
00:24:20.580 09:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:20.837 09:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.837 09:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:20.837 09:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.837 09:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:21.094 09:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.094 09:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:21.094 09:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:21.658 09:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:21.658 09:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:23.029 09:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:23.029 09:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:23.029 09:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.029 09:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:23.029 09:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.029 09:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:23.029 09:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.029 09:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:23.287 09:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:23.287 09:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:23.287 09:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.287 09:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:23.546 09:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.546 09:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:23.546 09:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.546 09:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:23.804 09:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.804 09:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:23.804 09:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.804 09:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:24.062 09:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.062 09:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:24.320 09:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.320 09:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:24.578 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.578 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:24.578 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:24.835 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:25.093 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:26.078 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:26.078 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:26.078 09:58:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.078 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:26.364 09:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.364 09:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:26.364 09:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.364 09:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:26.621 09:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:26.621 09:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:26.621 09:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.621 09:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:26.880 09:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.880 09:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:26.880 09:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.880 09:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:27.138 09:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.138 09:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:27.138 09:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.138 09:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:27.396 09:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.396 09:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:27.396 09:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.396 09:58:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:27.655 09:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.655 09:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:27.655 09:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:27.913 09:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:28.170 09:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:29.103 09:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:29.103 09:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:29.103 09:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.103 09:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:29.361 09:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:29.361 09:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:29.361 09:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.361 09:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:29.927 09:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:29.927 09:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:29.927 09:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.927 09:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:29.927 09:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.927 09:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:29.927 09:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.927 09:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:30.185 09:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.185 09:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:30.185 09:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.185 09:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:30.443 09:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:30.443 09:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:30.443 09:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.443 09:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:30.700 09:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:30.700 09:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:30.700 09:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:30.957 09:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:31.522 09:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:32.455 09:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:32.455 09:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:32.456 09:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.456 09:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:32.714 09:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:32.714 09:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:32.714 09:58:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.714 09:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:32.971 09:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.971 09:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:32.971 09:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.971 09:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:33.229 09:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.229 09:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:33.229 09:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.229 09:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:33.487 09:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.487 09:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:33.487 09:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.487 09:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:33.745 09:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:33.745 09:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:33.745 09:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.745 09:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:34.003 09:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.003 09:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:34.261 09:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:24:34.261 09:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:34.520 09:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:34.778 09:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:36.151 09:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:36.152 09:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:36.152 09:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.152 09:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:36.152 09:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.152 09:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:36.152 09:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.152 09:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:36.410 09:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.410 09:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:36.410 09:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.410 09:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:36.668 09:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.668 09:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:36.668 09:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.668 09:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:36.926 09:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.926 09:58:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:36.926 09:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.926 09:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:37.185 09:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.185 09:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:37.185 09:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.185 09:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:37.442 09:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.442 09:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:37.442 09:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:37.701 09:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:37.958 09:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:39.332 09:58:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:39.332 09:58:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:39.332 09:58:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.332 09:58:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:39.332 09:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:39.332 09:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:39.332 09:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.332 09:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:39.590 09:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.590 09:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:39.590 09:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.590 09:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:39.847 09:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.847 09:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:39.847 09:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.847 09:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:40.104 09:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.104 09:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:40.104 09:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.105 09:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:40.362 09:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.362 09:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:40.362 09:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.362 09:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:40.621 09:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.621 09:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:40.621 09:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:40.878 09:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:41.136 09:58:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
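[editor's note] The status checks that repeat throughout this trace all go through the same helper: it asks the bdevperf app, over its private RPC socket at /var/tmp/bdevperf.sock, for the current io_path list and filters one attribute per listener port with jq. Below is a minimal sketch of that helper, reconstructed from the xtrace above rather than copied from host/multipath_status.sh, so argument handling may differ slightly from the real script.

#!/usr/bin/env bash
# port_status <trsvcid> <attribute> <expected> -- reconstructed sketch, not the
# verbatim helper from host/multipath_status.sh.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path as seen in the trace

port_status() {
    local port=$1 attr=$2 expected=$3 actual
    # bdev_nvme_get_io_paths reports, per io_path, fields such as .current,
    # .connected and .accessible; pick the path whose listener port matches.
    actual=$("$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ "$actual" == "$expected" ]]
}

# Example, matching the checks logged above:
port_status 4420 current true && port_status 4421 accessible false

check_status in the trace is simply six of these calls in a row: current, connected and accessible for ports 4420 and 4421.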
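[editor's note] Each round of the test flips the ANA state of the two TCP listeners on the target side, waits a second, and re-checks the host's view. A hedged sketch of that sequence follows, with the NQN, address and ports taken from the trace; the helper name set_ANA_state mirrors the script, but this is a reconstruction, not the script itself.

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # target-side RPC (default socket)
nqn=nqn.2016-06.io.spdk:cnode1

set_ANA_state() {   # $1 = ANA state for the 4420 listener, $2 = for the 4421 listener
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# The host side is switched to active/active once, before the optimized/optimized
# round (the @116 step above); the later rounds only change the listener states.
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

set_ANA_state non_optimized non_optimized
sleep 1   # give the host a moment to pick up the ANA change before check_status runs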
00:24:42.511 09:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:42.511 09:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:42.511 09:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.511 09:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:42.511 09:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.511 09:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:42.511 09:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.511 09:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:42.768 09:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.768 09:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:42.768 09:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.768 09:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:43.026 09:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.026 09:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:43.026 09:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.026 09:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:43.284 09:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.284 09:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:43.284 09:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.284 09:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:43.542 09:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.542 09:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:43.542 09:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.542 09:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:43.800 09:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.800 09:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:43.800 09:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:44.058 09:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:44.623 09:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:45.583 09:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:45.583 09:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:45.583 09:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.583 09:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:45.842 09:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.842 09:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:45.842 09:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.842 09:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:46.100 09:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:46.100 09:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:46.100 09:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.100 09:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:46.357 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:46.357 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:46.357 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.357 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:46.615 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.615 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:46.615 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.615 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:46.873 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.873 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:46.873 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.873 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:47.131 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:47.131 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3816675 00:24:47.131 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3816675 ']' 00:24:47.131 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3816675 00:24:47.131 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:47.131 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.131 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3816675 00:24:47.131 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:47.131 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:47.131 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3816675' 00:24:47.131 killing process with pid 3816675 00:24:47.131 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3816675 00:24:47.131 09:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3816675 00:24:47.131 { 00:24:47.131 "results": [ 00:24:47.131 { 00:24:47.131 "job": "Nvme0n1", 
00:24:47.131 "core_mask": "0x4", 00:24:47.131 "workload": "verify", 00:24:47.131 "status": "terminated", 00:24:47.131 "verify_range": { 00:24:47.131 "start": 0, 00:24:47.131 "length": 16384 00:24:47.131 }, 00:24:47.131 "queue_depth": 128, 00:24:47.131 "io_size": 4096, 00:24:47.131 "runtime": 34.196146, 00:24:47.131 "iops": 8033.829309302867, 00:24:47.131 "mibps": 31.382145739464324, 00:24:47.131 "io_failed": 0, 00:24:47.131 "io_timeout": 0, 00:24:47.131 "avg_latency_us": 15904.784394266504, 00:24:47.131 "min_latency_us": 373.1911111111111, 00:24:47.131 "max_latency_us": 4076242.1096296296 00:24:47.131 } 00:24:47.131 ], 00:24:47.131 "core_count": 1 00:24:47.131 } 00:24:47.414 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3816675 00:24:47.414 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:47.414 [2024-11-20 09:57:48.071262] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:24:47.414 [2024-11-20 09:57:48.071381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3816675 ] 00:24:47.414 [2024-11-20 09:57:48.139629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.414 [2024-11-20 09:57:48.201782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.414 Running I/O for 90 seconds... 00:24:47.414 8374.00 IOPS, 32.71 MiB/s [2024-11-20T08:58:24.328Z] 8507.00 IOPS, 33.23 MiB/s [2024-11-20T08:58:24.328Z] 8507.00 IOPS, 33.23 MiB/s [2024-11-20T08:58:24.328Z] 8507.00 IOPS, 33.23 MiB/s [2024-11-20T08:58:24.328Z] 8533.00 IOPS, 33.33 MiB/s [2024-11-20T08:58:24.328Z] 8545.33 IOPS, 33.38 MiB/s [2024-11-20T08:58:24.328Z] 8543.86 IOPS, 33.37 MiB/s [2024-11-20T08:58:24.328Z] 8542.38 IOPS, 33.37 MiB/s [2024-11-20T08:58:24.328Z] 8547.56 IOPS, 33.39 MiB/s [2024-11-20T08:58:24.328Z] 8567.70 IOPS, 33.47 MiB/s [2024-11-20T08:58:24.328Z] 8562.64 IOPS, 33.45 MiB/s [2024-11-20T08:58:24.328Z] 8567.25 IOPS, 33.47 MiB/s [2024-11-20T08:58:24.328Z] 8568.92 IOPS, 33.47 MiB/s [2024-11-20T08:58:24.328Z] 8567.50 IOPS, 33.47 MiB/s [2024-11-20T08:58:24.328Z] 8562.27 IOPS, 33.45 MiB/s [2024-11-20T08:58:24.328Z] [2024-11-20 09:58:04.700244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.414 [2024-11-20 09:58:04.700312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:47.414 [2024-11-20 09:58:04.700365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.414 [2024-11-20 09:58:04.700384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:47.414 [2024-11-20 09:58:04.700408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.414 [2024-11-20 09:58:04.700426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:47.414 [2024-11-20 
09:58:04.700449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.414 [2024-11-20 09:58:04.700465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:47.414 [2024-11-20 09:58:04.700487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.700503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.700526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.700543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.700565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.700581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.700604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.700621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 
cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701724] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.701968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.701990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.702005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.702027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.702044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.702081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.415 [2024-11-20 09:58:04.702098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.702120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.415 [2024-11-20 
09:58:04.702135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.702157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.415 [2024-11-20 09:58:04.702176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.702214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.415 [2024-11-20 09:58:04.702231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.702254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.415 [2024-11-20 09:58:04.702270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.702292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.415 [2024-11-20 09:58:04.702316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.702340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.415 [2024-11-20 09:58:04.702356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.702379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.702395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.702771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.702796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.702824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.702842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.702865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.415 [2024-11-20 09:58:04.702881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:47.415 [2024-11-20 09:58:04.702903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111024 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.416 [2024-11-20 09:58:04.702919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.702940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.416 [2024-11-20 09:58:04.702957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.702979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.416 [2024-11-20 09:58:04.702995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.416 [2024-11-20 09:58:04.703037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.416 [2024-11-20 09:58:04.703101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703337] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 
sqhd:006e p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.703963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.703979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.704000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.704017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.704043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.704059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.704081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.704096] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.704118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.704133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.704155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.704170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.704191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.704207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.704229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.704244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.704265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.704295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.704330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.704348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.704370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.704387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.704408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.704425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.704448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.416 [2024-11-20 09:58:04.704465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.416 [2024-11-20 09:58:04.704486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.417 
[2024-11-20 09:58:04.704503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:47.417 [2024-11-20 09:58:04.704525 - 09:58:04.714572] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated READ/WRITE command and completion pairs on sqid:1 nsid:1 (lba:110112-111128, len:8), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:24:47.422 [2024-11-20 09:58:04.714594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:24:47.422 [2024-11-20 09:58:04.714610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:47.422 [2024-11-20 09:58:04.714649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.422 [2024-11-20 09:58:04.714665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:47.422 [2024-11-20 09:58:04.714686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.422 [2024-11-20 09:58:04.714706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:47.422 [2024-11-20 09:58:04.714728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.422 [2024-11-20 09:58:04.714744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:47.422 [2024-11-20 09:58:04.714765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.422 [2024-11-20 09:58:04.714781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:47.422 [2024-11-20 09:58:04.714803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.422 [2024-11-20 09:58:04.714819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:47.422 [2024-11-20 09:58:04.715541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.422 [2024-11-20 09:58:04.715565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:47.422 [2024-11-20 09:58:04.715593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.422 [2024-11-20 09:58:04.715611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:47.422 [2024-11-20 09:58:04.715634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.422 [2024-11-20 09:58:04.715651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:47.422 [2024-11-20 09:58:04.715672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.422 [2024-11-20 09:58:04.715689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:47.422 [2024-11-20 09:58:04.715712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:81 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.422 [2024-11-20 09:58:04.715728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:47.422 [2024-11-20 09:58:04.715749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.422 [2024-11-20 09:58:04.715765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:47.422 [2024-11-20 09:58:04.715790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.422 [2024-11-20 09:58:04.715809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:47.422 [2024-11-20 09:58:04.715832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.422 [2024-11-20 09:58:04.715849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:47.422 [2024-11-20 09:58:04.715871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.422 [2024-11-20 09:58:04.715887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:47.422 [2024-11-20 09:58:04.715930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.422 [2024-11-20 09:58:04.715946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:47.422 [2024-11-20 09:58:04.715967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.422 [2024-11-20 09:58:04.715982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:47.422 [2024-11-20 09:58:04.716003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.422 [2024-11-20 09:58:04.716019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.422 [2024-11-20 09:58:04.716040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.422 [2024-11-20 09:58:04.716055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.422 [2024-11-20 09:58:04.716076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.422 [2024-11-20 09:58:04.716091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716113] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 
cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.716967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.716987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.717003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.717023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.717039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.717060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.717075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.717096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.717111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.717133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.717148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.717169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.717184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.717206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.717221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.717242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.717258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.717278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.717294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.717341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.717363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.717386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.717403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:47.423 [2024-11-20 09:58:04.717425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.423 [2024-11-20 09:58:04.717441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.717463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.717479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.717501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.717518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.717540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.424 [2024-11-20 09:58:04.717556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.717579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.424 [2024-11-20 09:58:04.717595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.717633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.424 [2024-11-20 09:58:04.717649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.717670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:45 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.424 [2024-11-20 09:58:04.717701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.717725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.424 [2024-11-20 09:58:04.717741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.718327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.424 [2024-11-20 09:58:04.718351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.718379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.424 [2024-11-20 09:58:04.718397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.718419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.424 [2024-11-20 09:58:04.718440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.718463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.424 [2024-11-20 09:58:04.718480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.718502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.718518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.718540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.718555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.718578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.718594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.718615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.718631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 
09:58:04.718653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.718669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.718691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.718707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.718729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.718745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.718766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.718782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.718804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.718820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.718856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.718871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.718894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.718910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.718935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.718951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.718972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.718988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.719009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.719025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.719046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.719061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.719082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.719098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.719119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.719135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.719155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.719171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.719192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.719207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.719228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.719244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.719265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.719281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.719327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.719345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.719367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.719382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.719415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.719433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.719455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.719471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.719492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.719508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.719529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.424 [2024-11-20 09:58:04.719545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:47.424 [2024-11-20 09:58:04.719567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.425 [2024-11-20 09:58:04.719598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.719621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.425 [2024-11-20 09:58:04.719637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.719658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.425 [2024-11-20 09:58:04.719673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.719694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.425 [2024-11-20 09:58:04.719710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.719746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.425 [2024-11-20 09:58:04.719761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.719782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.719797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.719817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:47.425 [2024-11-20 09:58:04.719833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.719870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.719887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.719913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.719930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.719952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.719968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.719990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:29 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720708] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.720969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.720985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.721006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.721021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.721042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.425 [2024-11-20 09:58:04.721057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 
sqhd:004f p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.721078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.425 [2024-11-20 09:58:04.721094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.721115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.425 [2024-11-20 09:58:04.721130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:47.425 [2024-11-20 09:58:04.721152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.425 [2024-11-20 09:58:04.721167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.721188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.721204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.721226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.721242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.722042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.722089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.426 [2024-11-20 09:58:04.722135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.426 [2024-11-20 09:58:04.722177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.426 [2024-11-20 09:58:04.722222] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.426 [2024-11-20 09:58:04.722261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.426 [2024-11-20 09:58:04.722325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.426 [2024-11-20 09:58:04.722381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.426 [2024-11-20 09:58:04.722422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.426 [2024-11-20 09:58:04.722463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.426 [2024-11-20 09:58:04.722503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.722541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.722580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.722619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 
[2024-11-20 09:58:04.722658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.722714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.722773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.722809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.722845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.722880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.722917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.722952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.722972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.722988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.723009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.723024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.723044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 
lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.723075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.723098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.723114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.723136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.723152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.723173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.723189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.723218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.723235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.723256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.723272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.723293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.723318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.723342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.723358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.723380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.723396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.723418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.723434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.723456] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.723472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.723493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.723509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:47.426 [2024-11-20 09:58:04.723531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.426 [2024-11-20 09:58:04.723547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.723569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.723599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.723621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.723637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.723674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.723689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.723709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.723728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.723750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.723765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.723786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.723800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.723820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.723835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.723855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.723870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.723891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.723905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.723926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.723941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.723961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.723976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.723996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.724011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.724032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.724046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.724081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.724097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.724119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.427 [2024-11-20 09:58:04.724135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.724155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.427 [2024-11-20 09:58:04.724190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.724214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.427 [2024-11-20 09:58:04.724230] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.724252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.427 [2024-11-20 09:58:04.724269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.724911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.427 [2024-11-20 09:58:04.724935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.724963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.427 [2024-11-20 09:58:04.724981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.725003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.427 [2024-11-20 09:58:04.725019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.725046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.427 [2024-11-20 09:58:04.725065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.725087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.427 [2024-11-20 09:58:04.725104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.725125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.725141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.725163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.725195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.725218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.725233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.725270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 
09:58:04.725285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.725329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.725354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.725383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.725401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.725423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.725440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.725463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.725479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.725501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.725517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.725539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.725556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.725578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.725594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.725631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.725647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.725670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.725701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.725722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:110592 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.725737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.725757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.427 [2024-11-20 09:58:04.725773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:47.427 [2024-11-20 09:58:04.725793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.428 [2024-11-20 09:58:04.725808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.725829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.428 [2024-11-20 09:58:04.725843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.725868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.428 [2024-11-20 09:58:04.725884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.725904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.428 [2024-11-20 09:58:04.725919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.725940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.428 [2024-11-20 09:58:04.725955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.725975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.428 [2024-11-20 09:58:04.725991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.428 [2024-11-20 09:58:04.726026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.428 [2024-11-20 09:58:04.726061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.428 [2024-11-20 09:58:04.726097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.428 [2024-11-20 09:58:04.726132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.428 [2024-11-20 09:58:04.726168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.428 [2024-11-20 09:58:04.726203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.428 [2024-11-20 09:58:04.726238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.428 [2024-11-20 09:58:04.726273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.428 [2024-11-20 09:58:04.726340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.428 [2024-11-20 09:58:04.726378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.428 [2024-11-20 09:58:04.726416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.428 [2024-11-20 09:58:04.726453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 
p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.428 [2024-11-20 09:58:04.726490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.428 [2024-11-20 09:58:04.726528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.428 [2024-11-20 09:58:04.726565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.428 [2024-11-20 09:58:04.726602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.428 [2024-11-20 09:58:04.726639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.428 [2024-11-20 09:58:04.726678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.428 [2024-11-20 09:58:04.726715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.428 [2024-11-20 09:58:04.726752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.428 [2024-11-20 09:58:04.726809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.428 [2024-11-20 09:58:04.726849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.428 [2024-11-20 09:58:04.726901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.428 [2024-11-20 09:58:04.726936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.428 [2024-11-20 09:58:04.726972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.726992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.428 [2024-11-20 09:58:04.727007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.727027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.428 [2024-11-20 09:58:04.727042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.727062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.428 [2024-11-20 09:58:04.727077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.428 [2024-11-20 09:58:04.727097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.428 [2024-11-20 09:58:04.727112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.727132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.727147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.727167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.727183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.727203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.727219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.727239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.727258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.727280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.727319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.727343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.727360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.727381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.727398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.727420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.727436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.727457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.727473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.727495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.727511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.727535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.727551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.727573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.727590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.727628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:47.429 [2024-11-20 09:58:04.727644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.727681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.429 [2024-11-20 09:58:04.727697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.727717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.429 [2024-11-20 09:58:04.727732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.727754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.429 [2024-11-20 09:58:04.727772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.727795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.429 [2024-11-20 09:58:04.727811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.728564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.429 [2024-11-20 09:58:04.728590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.728623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.429 [2024-11-20 09:58:04.728641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.728664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.429 [2024-11-20 09:58:04.728681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.728703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.728719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.728742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.728759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.728781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:95 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.728797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.728820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.728852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.728875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.728891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.728928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.728944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.728965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.728981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.729002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.729018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.729044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.429 [2024-11-20 09:58:04.729060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.729081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.429 [2024-11-20 09:58:04.729096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.729117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.429 [2024-11-20 09:58:04.729132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.729153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.429 [2024-11-20 09:58:04.729168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.729188] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.429 [2024-11-20 09:58:04.729204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.729230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.429 [2024-11-20 09:58:04.729246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.729269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.429 [2024-11-20 09:58:04.729300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.729337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.429 [2024-11-20 09:58:04.729354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.729377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.429 [2024-11-20 09:58:04.729394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.729416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.429 [2024-11-20 09:58:04.729432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:47.429 [2024-11-20 09:58:04.729454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.729471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.729493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.729509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.729536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.729553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.729576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.729592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 
cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.729629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.729644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.729681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.729697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.729720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.729736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.729758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.729774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.729796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.729813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.729835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:110312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.729851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.729873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.729889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.729911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.729927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.729949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.729980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.730018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.730075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.730111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.730147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.730182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.730218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.730253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.730311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.730352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.730389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.730426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.730462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.730498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.730539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.730583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.730636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.730672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.430 [2024-11-20 09:58:04.730708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.430 [2024-11-20 09:58:04.730743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.430 [2024-11-20 09:58:04.730797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.730821] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.430 [2024-11-20 09:58:04.730852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.731436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.430 [2024-11-20 09:58:04.731460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.731487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.430 [2024-11-20 09:58:04.731506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.731535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.430 [2024-11-20 09:58:04.731553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.731575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.430 [2024-11-20 09:58:04.731591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.731613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.430 [2024-11-20 09:58:04.731634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:47.430 [2024-11-20 09:58:04.731657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.431 [2024-11-20 09:58:04.731674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.731696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.731727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.731750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.731766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.731805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.731821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:47.431 
[2024-11-20 09:58:04.731843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.731865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.731887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.731904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.731925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.731941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.731963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.731979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732675] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.732972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.732987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.733008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.431 [2024-11-20 09:58:04.733039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.733061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110744 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:47.431 [2024-11-20 09:58:04.733077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.733114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.431 [2024-11-20 09:58:04.733131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.733153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.431 [2024-11-20 09:58:04.733174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.733196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.431 [2024-11-20 09:58:04.733216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:47.431 [2024-11-20 09:58:04.733240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.431 [2024-11-20 09:58:04.733256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.733278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.733294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.733325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.733343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.733365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.733382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.733418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.733435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.733457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.733472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.733494] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.733509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.733530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.733546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.733567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.733598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.733620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.733635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.733656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.733672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.733693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.733712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.733734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.733749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.733770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.733785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.733806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.733821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.733841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.733856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 
09:58:04.733876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.733892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.733912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.733927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.733947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.733963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.733983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.733998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.734018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.740206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.740248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.740266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.740314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.740333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.740357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.740379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.740403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.740420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.740442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.740458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.740480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.432 [2024-11-20 09:58:04.740497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.740519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.432 [2024-11-20 09:58:04.740535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.740557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.432 [2024-11-20 09:58:04.740573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.740596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.432 [2024-11-20 09:58:04.740627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.741460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.432 [2024-11-20 09:58:04.741486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.741514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.432 [2024-11-20 09:58:04.741533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.741556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.432 [2024-11-20 09:58:04.741574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.741602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.432 [2024-11-20 09:58:04.741619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:47.432 [2024-11-20 09:58:04.741642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.433 [2024-11-20 09:58:04.741659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.741680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.433 [2024-11-20 09:58:04.741697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.741725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.433 [2024-11-20 09:58:04.741758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.741781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.433 [2024-11-20 09:58:04.741796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.741816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.433 [2024-11-20 09:58:04.741832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.741852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.433 [2024-11-20 09:58:04.741867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.741889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.433 [2024-11-20 09:58:04.741904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.741925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.433 [2024-11-20 09:58:04.741940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.741961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.433 [2024-11-20 09:58:04.741976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.741996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:77 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 
09:58:04.742881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.742967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.742987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.743003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.743023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.743039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.743059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.743074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.743094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.743109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:47.433 [2024-11-20 09:58:04.743130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.433 [2024-11-20 09:58:04.743149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.743170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.743185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.743205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.743221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.743241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.743256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.743276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.743314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.743338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.743370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.743393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.743409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.743431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.743447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.743469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.743485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.743506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.743522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.743544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.743560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.743583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.434 [2024-11-20 09:58:04.743599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.743622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.434 [2024-11-20 09:58:04.743642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.434 [2024-11-20 09:58:04.744219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.434 [2024-11-20 09:58:04.744264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.434 [2024-11-20 09:58:04.744311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.434 [2024-11-20 09:58:04.744359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.434 [2024-11-20 09:58:04.744398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.434 [2024-11-20 09:58:04.744436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.434 [2024-11-20 09:58:04.744474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.744513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.744552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:47.434 [2024-11-20 09:58:04.744590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.744628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.744666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.744710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.744748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.744786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.744824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.744863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.744916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.744968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.744990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:39 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.745005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.745026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.745042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.745062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.745077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.745097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.745112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.745133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.745147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.745172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.745187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.434 [2024-11-20 09:58:04.745207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.434 [2024-11-20 09:58:04.745223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:47.435 [2024-11-20 09:58:04.745243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.435 [2024-11-20 09:58:04.745258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:47.435 [2024-11-20 09:58:04.745279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.435 [2024-11-20 09:58:04.745318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:47.435 [2024-11-20 09:58:04.745342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.435 [2024-11-20 09:58:04.745358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:47.435 [2024-11-20 
09:58:04.745379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.435 [2024-11-20 09:58:04.745394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
[... 00:24:47.435-00:24:47.440 / 2024-11-20 09:58:04.745415-09:58:04.754934: the same pair of notices from nvme_qpair.c (243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion) repeats for every remaining I/O outstanding on qid:1 nsid:1, namely READ commands (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (SGL DATA BLOCK OFFSET 0x0 len:0x1000), each len:8, covering lba 110112 through 111128; every command completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 while the path reports the namespace as ANA-inaccessible ...]
00:24:47.440 [2024-11-20 09:58:04.754959] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.754976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 
sqhd:0074 p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755798] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.755983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.755998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.756023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.756038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.756062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.756078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.756103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-11-20 09:58:04.756118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:47.440 [2024-11-20 09:58:04.756143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.441 [2024-11-20 09:58:04.756159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:04.756329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 
09:58:04.756352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:47.441 8028.25 IOPS, 31.36 MiB/s [2024-11-20T08:58:24.355Z] 7556.00 IOPS, 29.52 MiB/s [2024-11-20T08:58:24.355Z] 7136.22 IOPS, 27.88 MiB/s [2024-11-20T08:58:24.355Z] 6760.63 IOPS, 26.41 MiB/s [2024-11-20T08:58:24.355Z] 6830.25 IOPS, 26.68 MiB/s [2024-11-20T08:58:24.355Z] 6909.67 IOPS, 26.99 MiB/s [2024-11-20T08:58:24.355Z] 7021.36 IOPS, 27.43 MiB/s [2024-11-20T08:58:24.355Z] 7201.26 IOPS, 28.13 MiB/s [2024-11-20T08:58:24.355Z] 7376.33 IOPS, 28.81 MiB/s [2024-11-20T08:58:24.355Z] 7514.40 IOPS, 29.35 MiB/s [2024-11-20T08:58:24.355Z] 7553.62 IOPS, 29.51 MiB/s [2024-11-20T08:58:24.355Z] 7589.63 IOPS, 29.65 MiB/s [2024-11-20T08:58:24.355Z] 7622.25 IOPS, 29.77 MiB/s [2024-11-20T08:58:24.355Z] 7716.72 IOPS, 30.14 MiB/s [2024-11-20T08:58:24.355Z] 7833.23 IOPS, 30.60 MiB/s [2024-11-20T08:58:24.355Z] 7951.74 IOPS, 31.06 MiB/s [2024-11-20T08:58:24.355Z] [2024-11-20 09:58:21.213949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.441 [2024-11-20 09:58:21.214027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.214075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.214094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.214132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.214149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.214173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.214190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.214212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.214229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.214251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.214267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.214290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.214316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 
09:58:21.214344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.214361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.214384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.214401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.214423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.214439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.214462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.214478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.214501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.214528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.214552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.214569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.214590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.214607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.214631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.441 [2024-11-20 09:58:21.214648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.214669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.441 [2024-11-20 09:58:21.214686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.214708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.441 [2024-11-20 09:58:21.214723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 
cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.214745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.214761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.214783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.214800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.215871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.215898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.215926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.215944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.215968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.215984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.216006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.216023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.216044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.216067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.216090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.216107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.216129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.216145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.216167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.216199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.216221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.216237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.216258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.216273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.441 [2024-11-20 09:58:21.216321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.441 [2024-11-20 09:58:21.216341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.216363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.442 [2024-11-20 09:58:21.216380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.216401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.442 [2024-11-20 09:58:21.216418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.216440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.216456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.216478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:47272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.216494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.216516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.216532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.216555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:47336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.216571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.216598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.216629] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.216652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.442 [2024-11-20 09:58:21.216668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.216689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.442 [2024-11-20 09:58:21.216705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.216726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.442 [2024-11-20 09:58:21.216742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.216763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:48192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.442 [2024-11-20 09:58:21.216779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.216800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.442 [2024-11-20 09:58:21.216815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.216836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.442 [2024-11-20 09:58:21.216852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.216874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:47416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.216889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.216910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:47448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.216925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.216947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.216963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.216984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.216999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.217020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:47544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.217035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.217079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.217096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.217118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.217134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.217155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.217172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.217194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:47672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.217209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.217232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:47704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.217248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.217988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.442 [2024-11-20 09:58:21.218013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.218041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.218059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.218082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:47440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.218098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.218120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:24 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.218137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.218158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:47504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.218174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.218197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.218213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.218234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.218250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.218272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:47600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.218318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.218344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:47632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.218376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.218399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.218415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.218437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:47696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.218453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.218474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:47736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.218490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.218512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.218528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.218549] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.218565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.218587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.442 [2024-11-20 09:58:21.218602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:47.442 [2024-11-20 09:58:21.218625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:47864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.443 [2024-11-20 09:58:21.218641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.218679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.443 [2024-11-20 09:58:21.218694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.218730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:47936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.443 [2024-11-20 09:58:21.218746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.218769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.218785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.218807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:48280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.218827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.218849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:48296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.218866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.218888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:47984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.443 [2024-11-20 09:58:21.218904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.218925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.443 [2024-11-20 09:58:21.218941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 
sqhd:0030 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.218964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.443 [2024-11-20 09:58:21.218980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.219002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:48080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.443 [2024-11-20 09:58:21.219018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.219039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.219056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.219077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.219093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.219115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.219130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.219152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.219168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.219190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.219206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.219227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.219243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.219265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.219285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.219315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.443 [2024-11-20 09:58:21.219333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.219355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.219372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.220531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.443 [2024-11-20 09:58:21.220557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.220585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.443 [2024-11-20 09:58:21.220603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.220626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.443 [2024-11-20 09:58:21.220643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.220665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.443 [2024-11-20 09:58:21.220681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.220703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:47960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.220719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.220741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.220757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.220779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.220795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.220817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.220833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.220855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.220871] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.220892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.220908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.220936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:47392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.443 [2024-11-20 09:58:21.220953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.220975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.443 [2024-11-20 09:58:21.220991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.221013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.443 [2024-11-20 09:58:21.221029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.221051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.221083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.221105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.221121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.221143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.443 [2024-11-20 09:58:21.221159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.221180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.443 [2024-11-20 09:58:21.221195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.221216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.443 [2024-11-20 09:58:21.221232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.221253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:47.443 [2024-11-20 09:58:21.221269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.221313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.443 [2024-11-20 09:58:21.221332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:47.443 [2024-11-20 09:58:21.221354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.221371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.221393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.221409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.221436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.221453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.221475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.221491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.221513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:47600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.221529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.221551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.221567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.221589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.221605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.221627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.221642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.221664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:92 nsid:1 lba:47864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.221680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.221702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.221718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.221740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.444 [2024-11-20 09:58:21.221756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.221778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:47984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.221794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.221815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.221831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.221853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.444 [2024-11-20 09:58:21.221869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.221891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.444 [2024-11-20 09:58:21.221911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.221950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.444 [2024-11-20 09:58:21.221966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.221988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.444 [2024-11-20 09:58:21.222004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.222026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.444 [2024-11-20 09:58:21.222042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.224981] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.444 [2024-11-20 09:58:21.225009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.444 [2024-11-20 09:58:21.225054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:48336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.444 [2024-11-20 09:58:21.225092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.444 [2024-11-20 09:58:21.225131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.444 [2024-11-20 09:58:21.225169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.444 [2024-11-20 09:58:21.225207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:48400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.444 [2024-11-20 09:58:21.225245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.444 [2024-11-20 09:58:21.225282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.225337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.225377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 
sqhd:006b p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.225414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.444 [2024-11-20 09:58:21.225452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.444 [2024-11-20 09:58:21.225489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.444 [2024-11-20 09:58:21.225527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:47304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.225565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.444 [2024-11-20 09:58:21.225602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.444 [2024-11-20 09:58:21.225639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.225676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.225714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.225752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:47.444 [2024-11-20 09:58:21.225773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.444 [2024-11-20 09:58:21.225789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.225816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.445 [2024-11-20 09:58:21.225833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.225854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.445 [2024-11-20 09:58:21.225870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.225892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.445 [2024-11-20 09:58:21.225907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.225929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.445 [2024-11-20 09:58:21.225945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.225967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.225983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.226004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.226020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.226042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.226058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.226084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:48272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.445 [2024-11-20 09:58:21.226101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.226123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:47728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.445 [2024-11-20 09:58:21.226140] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.226162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.445 [2024-11-20 09:58:21.226181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.226204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.445 [2024-11-20 09:58:21.226220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.226242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:47928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.445 [2024-11-20 09:58:21.226259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.226293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.226317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.226341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:48456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.226357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.226379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.226395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.226417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.226433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.226454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.226471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.226493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.226509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.226533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:47.445 [2024-11-20 09:58:21.226554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.227857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.227883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.227911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.227933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.227956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:48584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.227973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.227995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.228015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.228038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.228054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.228076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:47976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.445 [2024-11-20 09:58:21.228098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.228121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.445 [2024-11-20 09:58:21.228138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.228160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.445 [2024-11-20 09:58:21.228176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.228198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.445 [2024-11-20 09:58:21.228215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.228236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:67 nsid:1 lba:48208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.445 [2024-11-20 09:58:21.228252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.228275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.228291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.228324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.228343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.228365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.228381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.228402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.228423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.228446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.228465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.228487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.445 [2024-11-20 09:58:21.228503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.228525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.228541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.228563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.228584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:47.445 [2024-11-20 09:58:21.229035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.445 [2024-11-20 09:58:21.229061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.229089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.446 [2024-11-20 09:58:21.229107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.229130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:48152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.446 [2024-11-20 09:58:21.229146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.229167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.446 [2024-11-20 09:58:21.229200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.229222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.446 [2024-11-20 09:58:21.229238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.229260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.446 [2024-11-20 09:58:21.229275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.229321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.446 [2024-11-20 09:58:21.229340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.229363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.446 [2024-11-20 09:58:21.229379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.229401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.446 [2024-11-20 09:58:21.229418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.229440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.446 [2024-11-20 09:58:21.229456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.229477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.446 [2024-11-20 09:58:21.229493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 
00:24:47.446 [2024-11-20 09:58:21.229515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.446 [2024-11-20 09:58:21.229532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.229562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.446 [2024-11-20 09:58:21.229579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.229601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.446 [2024-11-20 09:58:21.229617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.229639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.446 [2024-11-20 09:58:21.229655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.229677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.446 [2024-11-20 09:58:21.229693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.229715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.446 [2024-11-20 09:58:21.229735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.229759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.446 [2024-11-20 09:58:21.229775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.230169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.446 [2024-11-20 09:58:21.230198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.230227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:47808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.446 [2024-11-20 09:58:21.230245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.230270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.446 [2024-11-20 09:58:21.230287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.230315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.446 [2024-11-20 09:58:21.230334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.230357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.446 [2024-11-20 09:58:21.230373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.230394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.446 [2024-11-20 09:58:21.230410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.230438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.446 [2024-11-20 09:58:21.230455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.230477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.446 [2024-11-20 09:58:21.230493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.230515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.446 [2024-11-20 09:58:21.230530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.230553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.446 [2024-11-20 09:58:21.230568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.230590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.446 [2024-11-20 09:58:21.230606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.230628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.446 [2024-11-20 09:58:21.230644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.230665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.446 [2024-11-20 09:58:21.230681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.230703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.446 [2024-11-20 09:58:21.230719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.230741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:47976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.446 [2024-11-20 09:58:21.230757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.230779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.446 [2024-11-20 09:58:21.230794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:47.446 [2024-11-20 09:58:21.230816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:48208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.446 [2024-11-20 09:58:21.230832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.230870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.230887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.230908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.230927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.230950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.447 [2024-11-20 09:58:21.230966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.230988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:48352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.231004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.232871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.447 [2024-11-20 09:58:21.232896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.232938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:47.447 [2024-11-20 09:58:21.232956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.232978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.232994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.233015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.233030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.233052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.233068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.233089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.447 [2024-11-20 09:58:21.233104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.233125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.447 [2024-11-20 09:58:21.233141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.233162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.233178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.233199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.447 [2024-11-20 09:58:21.233215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.233236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.233257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.233279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.233321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.233346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 
lba:47776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.447 [2024-11-20 09:58:21.233362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.233385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:47808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.447 [2024-11-20 09:58:21.233401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.233423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.233438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.233460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.233476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.233499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.233515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.233536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:48360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.447 [2024-11-20 09:58:21.233552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.233574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.447 [2024-11-20 09:58:21.233605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.233626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.233641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.233661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.447 [2024-11-20 09:58:21.233677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.233697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.233711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.233732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.447 [2024-11-20 09:58:21.233747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.236722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.236749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.236776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.236795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.236818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:48832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.236849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.236872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.236903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.236927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.236944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.236967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.236983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.237006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.237022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.237044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.237061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.237084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.447 [2024-11-20 09:58:21.237100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:24:47.447 [2024-11-20 09:58:21.237122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.447 [2024-11-20 09:58:21.237139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.237161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.447 [2024-11-20 09:58:21.237177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:47.447 [2024-11-20 09:58:21.237199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.237214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.237242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.237259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.237282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.237298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.237331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.237348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.237371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.448 [2024-11-20 09:58:21.237387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.237409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.448 [2024-11-20 09:58:21.237425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.237447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.448 [2024-11-20 09:58:21.237463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.237486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.237503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.237524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.237541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.237564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.237580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.237602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.237634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.237656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.237673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.237712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.237728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.237749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.448 [2024-11-20 09:58:21.237769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.237807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.237824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.237848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.448 [2024-11-20 09:58:21.237865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.237888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.448 [2024-11-20 09:58:21.237903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.237925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.237942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.237964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.448 [2024-11-20 09:58:21.237980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.238002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.448 [2024-11-20 09:58:21.238017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.238040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.238057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.238079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:48104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.238095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.238117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.238134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.238156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.238173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.238197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.238213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.238235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.448 [2024-11-20 09:58:21.238256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.238279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.448 [2024-11-20 09:58:21.238297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.238331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:47.448 [2024-11-20 09:58:21.238348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.238371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.448 [2024-11-20 09:58:21.238388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.238410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.448 [2024-11-20 09:58:21.238427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.238449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.238465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.238488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.238505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.238526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.238543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.238566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.238599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.239466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.239490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.239518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.448 [2024-11-20 09:58:21.239536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.239560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.448 [2024-11-20 09:58:21.239576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.239600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 
nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.448 [2024-11-20 09:58:21.239621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:47.448 [2024-11-20 09:58:21.239646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.448 [2024-11-20 09:58:21.239662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.239685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.449 [2024-11-20 09:58:21.239712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.239734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.449 [2024-11-20 09:58:21.239750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.239772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.449 [2024-11-20 09:58:21.239789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.239811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.449 [2024-11-20 09:58:21.239827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.239849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.449 [2024-11-20 09:58:21.239866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.239888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.449 [2024-11-20 09:58:21.239904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.239926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.449 [2024-11-20 09:58:21.239943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.240322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.449 [2024-11-20 09:58:21.240346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.240380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.449 [2024-11-20 09:58:21.240399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.240422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.449 [2024-11-20 09:58:21.240439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.240461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.449 [2024-11-20 09:58:21.240477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.240506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.449 [2024-11-20 09:58:21.240523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.240546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.449 [2024-11-20 09:58:21.240563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.240585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.449 [2024-11-20 09:58:21.240601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.240624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.449 [2024-11-20 09:58:21.240640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.240663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.449 [2024-11-20 09:58:21.240680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.240711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.449 [2024-11-20 09:58:21.240728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.449 [2024-11-20 09:58:21.241194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:47.449 
[2024-11-20 09:58:21.241221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.449 [2024-11-20 09:58:21.241240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.449 [2024-11-20 09:58:21.241279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.449 [2024-11-20 09:58:21.241328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.449 [2024-11-20 09:58:21.241369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.449 [2024-11-20 09:58:21.241407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.449 [2024-11-20 09:58:21.241455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:48336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.449 [2024-11-20 09:58:21.241495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.449 [2024-11-20 09:58:21.241533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.449 [2024-11-20 09:58:21.241572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.449 [2024-11-20 09:58:21.241610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.449 [2024-11-20 09:58:21.241650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.449 [2024-11-20 09:58:21.241688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:48240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.449 [2024-11-20 09:58:21.241726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.449 [2024-11-20 09:58:21.241765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.449 [2024-11-20 09:58:21.241804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.449 [2024-11-20 09:58:21.241844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.449 [2024-11-20 09:58:21.241888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.449 [2024-11-20 09:58:21.241931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.449 [2024-11-20 09:58:21.241973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.241995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.449 [2024-11-20 09:58:21.242011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:47.449 [2024-11-20 09:58:21.242033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.242051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.242073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.242089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.242112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.242129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.242151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.242168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.242190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.242207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.242229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.450 [2024-11-20 09:58:21.242246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.242269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.450 [2024-11-20 09:58:21.242287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.242316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.242335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.242357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.450 [2024-11-20 09:58:21.242375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.242397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:47.450 [2024-11-20 09:58:21.242418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.244248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:48160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.244275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.244314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.244335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.244370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.450 [2024-11-20 09:58:21.244387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.244410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.450 [2024-11-20 09:58:21.244426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.244449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.450 [2024-11-20 09:58:21.244465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.244488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.450 [2024-11-20 09:58:21.244504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.244526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.244543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.244566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.244582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.244622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.244639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.244662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.244693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.244715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.244730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.244751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.244767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.244793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.450 [2024-11-20 09:58:21.244823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.244846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.244876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.244902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.244923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.244944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.450 [2024-11-20 09:58:21.244961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.244993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.245010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.245032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.450 [2024-11-20 09:58:21.245049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.245070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.245087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.245110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.245126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.245148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.245164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.245186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.245203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.245226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.450 [2024-11-20 09:58:21.245241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:47.450 [2024-11-20 09:58:21.245264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.245281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.245315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.245334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.247671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.451 [2024-11-20 09:58:21.247700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.247728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.451 [2024-11-20 09:58:21.247746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.247769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.451 [2024-11-20 09:58:21.247786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.247809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.451 [2024-11-20 09:58:21.247825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 
dnr:0 00:24:47.451 [2024-11-20 09:58:21.247847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.247864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.247887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.247904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.247925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.247943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.247964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.247981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.248021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.248074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.248112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.248157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.248197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.248250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.248290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.248337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.248376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.248414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.451 [2024-11-20 09:58:21.248453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.248492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.248531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.451 [2024-11-20 09:58:21.248570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.451 [2024-11-20 09:58:21.248618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.451 [2024-11-20 09:58:21.248660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.451 [2024-11-20 09:58:21.248700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.248738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.248777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.451 [2024-11-20 09:58:21.248816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.451 [2024-11-20 09:58:21.248854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.451 [2024-11-20 09:58:21.248893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.451 [2024-11-20 09:58:21.248931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.451 [2024-11-20 09:58:21.248969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.248991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.451 [2024-11-20 09:58:21.249008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.249029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:47.451 [2024-11-20 09:58:21.249046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.249070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.451 [2024-11-20 09:58:21.249086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.249109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.451 [2024-11-20 09:58:21.249126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.249153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:48984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.451 [2024-11-20 09:58:21.249170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:47.451 [2024-11-20 09:58:21.249193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.452 [2024-11-20 09:58:21.249209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.249233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.452 [2024-11-20 09:58:21.249249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.251258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.452 [2024-11-20 09:58:21.251290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.251327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.452 [2024-11-20 09:58:21.251347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.251370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.452 [2024-11-20 09:58:21.251387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.251419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.452 [2024-11-20 09:58:21.251436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.251458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 
nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.452 [2024-11-20 09:58:21.251475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.251498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.452 [2024-11-20 09:58:21.251514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.251537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.452 [2024-11-20 09:58:21.251553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.251575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.452 [2024-11-20 09:58:21.251592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.251614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.452 [2024-11-20 09:58:21.251631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.251659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.452 [2024-11-20 09:58:21.251693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.251716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.452 [2024-11-20 09:58:21.251732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.251754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.452 [2024-11-20 09:58:21.251770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.251792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.452 [2024-11-20 09:58:21.251808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.251830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.452 [2024-11-20 09:58:21.251845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.251882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.452 [2024-11-20 09:58:21.251899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.251921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.452 [2024-11-20 09:58:21.251938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.251960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.452 [2024-11-20 09:58:21.251976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.251999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.452 [2024-11-20 09:58:21.252015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.252053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.452 [2024-11-20 09:58:21.252070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.252107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.452 [2024-11-20 09:58:21.252124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.252146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.452 [2024-11-20 09:58:21.252164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.253402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.452 [2024-11-20 09:58:21.253432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.253462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.452 [2024-11-20 09:58:21.253481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:47.452 [2024-11-20 09:58:21.253504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.452 [2024-11-20 09:58:21.253521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:24:47.452 [2024-11-20 09:58:21.253-268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: repeated READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands, sqid:1 nsid:1 len:8, lba 47912-50408, various cid 
00:24:47.457 [2024-11-20 09:58:21.253-268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: each of the commands above completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0013-0043 p:0 m:0 dnr:0 
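The two *NOTICE* streams above are easier to read in aggregate than entry by entry. As a rough sketch (the log filename console.log is only an illustrative placeholder, not a file produced by this job), the submitted opcodes and the inaccessible-path completions can be tallied straight from a saved copy of this console output; grep -o is used instead of grep -c so that occurrences are still counted correctly when several notices share one physical line:

  # count submitted commands per opcode (READ/WRITE) reported by nvme_io_qpair_print_command
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' console.log | sort | uniq -c
  # count completions that came back ASYMMETRIC ACCESS INACCESSIBLE (03/02)
  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE' console.log | wc -l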
00:24:47.457 8001.69 IOPS, 31.26 MiB/s [2024-11-20T08:58:24.371Z] 8020.52 IOPS, 31.33 MiB/s [2024-11-20T08:58:24.371Z] 8036.47 IOPS, 31.39 MiB/s [2024-11-20T08:58:24.371Z] Received shutdown signal, test time was about 34.196953 seconds 00:24:47.457 00:24:47.457 Latency(us) 00:24:47.457 [2024-11-20T08:58:24.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.457 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:47.457 Verification LBA range: start 0x0 length 0x4000 00:24:47.457 Nvme0n1 : 34.20 8033.83 31.38 0.00 0.00 15904.78 373.19 4076242.11 00:24:47.457 [2024-11-20T08:58:24.371Z] =================================================================================================================== 00:24:47.457 [2024-11-20T08:58:24.371Z] Total : 8033.83 31.38 0.00 0.00 15904.78 373.19 4076242.11 00:24:47.457 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:47.715 rmmod nvme_tcp 00:24:47.715 rmmod nvme_fabrics 00:24:47.715 rmmod nvme_keyring 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3816394 ']' 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3816394 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3816394 ']' 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3816394 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3816394 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- 
# process_name=reactor_0 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3816394' 00:24:47.715 killing process with pid 3816394 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3816394 00:24:47.715 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3816394 00:24:47.975 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:47.975 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:47.975 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:47.975 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:47.975 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:47.975 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:47.975 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:47.975 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:47.975 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:47.975 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.975 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.975 09:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.942 09:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:49.942 00:24:49.942 real 0m43.440s 00:24:49.942 user 2m11.943s 00:24:49.942 sys 0m10.842s 00:24:49.942 09:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:49.942 09:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:49.942 ************************************ 00:24:49.942 END TEST nvmf_host_multipath_status 00:24:49.942 ************************************ 00:24:49.942 09:58:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:49.942 09:58:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:49.942 09:58:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:49.942 09:58:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.942 ************************************ 00:24:49.942 START TEST nvmf_discovery_remove_ifc 00:24:49.942 ************************************ 00:24:49.942 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:50.201 * Looking for test storage... 
00:24:50.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:50.201 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:50.201 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:24:50.201 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:50.201 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:50.201 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:50.201 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:50.201 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:50.201 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:50.201 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:50.201 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:50.201 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:50.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.202 --rc genhtml_branch_coverage=1 00:24:50.202 --rc genhtml_function_coverage=1 00:24:50.202 --rc genhtml_legend=1 00:24:50.202 --rc geninfo_all_blocks=1 00:24:50.202 --rc geninfo_unexecuted_blocks=1 00:24:50.202 00:24:50.202 ' 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:50.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.202 --rc genhtml_branch_coverage=1 00:24:50.202 --rc genhtml_function_coverage=1 00:24:50.202 --rc genhtml_legend=1 00:24:50.202 --rc geninfo_all_blocks=1 00:24:50.202 --rc geninfo_unexecuted_blocks=1 00:24:50.202 00:24:50.202 ' 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:50.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.202 --rc genhtml_branch_coverage=1 00:24:50.202 --rc genhtml_function_coverage=1 00:24:50.202 --rc genhtml_legend=1 00:24:50.202 --rc geninfo_all_blocks=1 00:24:50.202 --rc geninfo_unexecuted_blocks=1 00:24:50.202 00:24:50.202 ' 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:50.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.202 --rc genhtml_branch_coverage=1 00:24:50.202 --rc genhtml_function_coverage=1 00:24:50.202 --rc genhtml_legend=1 00:24:50.202 --rc geninfo_all_blocks=1 00:24:50.202 --rc geninfo_unexecuted_blocks=1 00:24:50.202 00:24:50.202 ' 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.202 
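The trace above is autotest_common.sh probing the installed lcov version and, because 1.15 compares below 2, exporting LCOV_OPTS/LCOV with the old-style branch/function coverage flags. A minimal sketch of that gate, assuming the lt helper from scripts/common.sh behaves as traced (cmp_versions "$1" '<' "$2"); the flag values are the ones exported in the log, the rest is illustrative:

    # Sketch of the coverage-option gate traced above; "lt" is the
    # scripts/common.sh helper shown in the trace, everything else is
    # illustrative, not the script verbatim.
    lcov_version=$(lcov --version | awk '{print $NF}')   # 1.15 in this run
    if lt "$lcov_version" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
        export LCOV_OPTS="$lcov_rc_opt --rc genhtml_branch_coverage=1 \
            --rc genhtml_function_coverage=1 --rc genhtml_legend=1 \
            --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1"
        export LCOV="lcov $LCOV_OPTS"
    fi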
09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.202 09:58:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.202 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:50.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:50.203 09:58:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:52.731 09:58:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:52.731 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.731 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:52.732 09:58:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:52.732 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:52.732 Found net devices under 0000:09:00.0: cvl_0_0 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:52.732 Found net devices under 0000:09:00.1: cvl_0_1 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:52.732 
09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:52.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:52.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:24:52.732 00:24:52.732 --- 10.0.0.2 ping statistics --- 00:24:52.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.732 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:52.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:24:52.732 00:24:52.732 --- 10.0.0.1 ping statistics --- 00:24:52.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.732 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:52.732 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.733 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3823146 00:24:52.733 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:52.733 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3823146 00:24:52.733 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3823146 ']' 00:24:52.733 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.733 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:52.733 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:52.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.733 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:52.733 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.733 [2024-11-20 09:58:29.369168] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:24:52.733 [2024-11-20 09:58:29.369254] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.733 [2024-11-20 09:58:29.440779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.733 [2024-11-20 09:58:29.495778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.733 [2024-11-20 09:58:29.495849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.733 [2024-11-20 09:58:29.495876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.733 [2024-11-20 09:58:29.495888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.733 [2024-11-20 09:58:29.495898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:52.733 [2024-11-20 09:58:29.496538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.733 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:52.733 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:52.733 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:52.733 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:52.733 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.733 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.733 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:52.733 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.733 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.733 [2024-11-20 09:58:29.637164] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.991 [2024-11-20 09:58:29.645380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:52.991 null0 00:24:52.991 [2024-11-20 09:58:29.677339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:52.991 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.991 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3823171 00:24:52.991 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:24:52.991 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3823171 /tmp/host.sock 00:24:52.991 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3823171 ']' 00:24:52.991 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:52.991 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:52.991 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:52.991 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:52.991 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:52.991 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.991 [2024-11-20 09:58:29.744779] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:24:52.991 [2024-11-20 09:58:29.744860] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3823171 ] 00:24:52.991 [2024-11-20 09:58:29.813261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.991 [2024-11-20 09:58:29.870246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.249 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:53.249 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:53.249 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:53.249 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:53.249 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.249 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:53.249 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.249 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:53.249 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.249 09:58:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:53.249 09:58:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.249 09:58:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:53.249 09:58:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.249 09:58:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.621 [2024-11-20 09:58:31.135951] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:54.622 [2024-11-20 09:58:31.135974] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:54.622 [2024-11-20 09:58:31.136001] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:54.622 [2024-11-20 09:58:31.262473] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:54.622 [2024-11-20 09:58:31.363222] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:54.622 [2024-11-20 09:58:31.364174] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1c2ebe0:1 started. 00:24:54.622 [2024-11-20 09:58:31.365775] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:54.622 [2024-11-20 09:58:31.365826] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:54.622 [2024-11-20 09:58:31.365860] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:54.622 [2024-11-20 09:58:31.365880] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:54.622 [2024-11-20 09:58:31.365901] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:54.622 [2024-11-20 09:58:31.373351] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1c2ebe0 was disconnected and freed. delete nvme_qpair. 
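The discovery connection to 10.0.0.2:8009 has just attached nvme0 (subsystem nqn.2016-06.io.spdk:cnode0 on port 4420), and the host script now waits for the nvme0n1 bdev by polling bdev_get_bdevs over /tmp/host.sock. A rough sketch of that wait pattern, assuming only what the trace shows: the get_bdev_list pipeline is reproduced from the traced commands, while the retry loop is an approximation of the repeated poll/"sleep 1" sequence, not the script itself:

    # get_bdev_list reproduces the traced pipeline; the retry loop is an
    # approximation of the poll/"sleep 1" cycle seen below, not verbatim.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme0n1   # bdev appears once discovery attach completes
    # Next steps visible in the log: delete 10.0.0.2/24 from cvl_0_0, bring
    # the link down, then wait_for_bdev '' until the bdev list drains.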
00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:54.622 09:58:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:55.994 09:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:55.994 09:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.994 09:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:55.994 09:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.994 09:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:55.994 09:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:55.994 09:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:55.994 09:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.994 09:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:55.994 09:58:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:56.927 09:58:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:56.927 09:58:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.927 09:58:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.927 09:58:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:56.927 09:58:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:56.927 09:58:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:56.927 09:58:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:56.927 09:58:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.927 09:58:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:56.927 09:58:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:57.860 09:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:57.860 09:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.860 09:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:57.860 09:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:57.860 09:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.860 09:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:57.860 09:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:57.860 09:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.860 09:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:57.860 09:58:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:58.792 09:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:58.792 09:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.792 09:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.793 09:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:58.793 09:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:58.793 09:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:58.793 09:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:58.793 09:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.793 09:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:58.793 09:58:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:00.164 09:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:00.164 09:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:00.164 09:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:00.164 09:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.164 09:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:00.164 09:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:00.164 09:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:00.165 09:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.165 09:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:00.165 09:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:00.165 [2024-11-20 09:58:36.807497] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:00.165 [2024-11-20 09:58:36.807558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.165 [2024-11-20 09:58:36.807595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.165 [2024-11-20 09:58:36.807613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.165 [2024-11-20 09:58:36.807626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.165 [2024-11-20 09:58:36.807638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.165 [2024-11-20 09:58:36.807651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.165 [2024-11-20 09:58:36.807665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.165 [2024-11-20 09:58:36.807683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.165 [2024-11-20 09:58:36.807697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.165 [2024-11-20 09:58:36.807709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.165 [2024-11-20 09:58:36.807721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0b400 is same with the state(6) to be set 00:25:00.165 [2024-11-20 09:58:36.817516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0b400 (9): Bad file descriptor 00:25:00.165 [2024-11-20 09:58:36.827566] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:00.165 [2024-11-20 09:58:36.827603] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:00.165 [2024-11-20 09:58:36.827622] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:00.165 [2024-11-20 09:58:36.827631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:00.165 [2024-11-20 09:58:36.827671] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:01.098 09:58:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:01.098 09:58:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.098 09:58:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:01.098 09:58:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:01.098 09:58:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.098 09:58:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:01.098 09:58:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:01.098 [2024-11-20 09:58:37.846339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:01.098 [2024-11-20 09:58:37.846413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0b400 with addr=10.0.0.2, port=4420 00:25:01.098 [2024-11-20 09:58:37.846441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0b400 is same with the state(6) to be set 00:25:01.098 [2024-11-20 09:58:37.846488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0b400 (9): Bad file descriptor 00:25:01.098 [2024-11-20 09:58:37.846900] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:25:01.098 [2024-11-20 09:58:37.846945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:01.098 [2024-11-20 09:58:37.846962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:01.098 [2024-11-20 09:58:37.846979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:01.099 [2024-11-20 09:58:37.846991] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:01.099 [2024-11-20 09:58:37.847001] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:01.099 [2024-11-20 09:58:37.847008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:01.099 [2024-11-20 09:58:37.847021] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:25:01.099 [2024-11-20 09:58:37.847030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:01.099 09:58:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.099 09:58:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:01.099 09:58:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:02.033 [2024-11-20 09:58:38.849528] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:02.033 [2024-11-20 09:58:38.849575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:02.033 [2024-11-20 09:58:38.849598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:02.033 [2024-11-20 09:58:38.849627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:02.033 [2024-11-20 09:58:38.849641] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:02.033 [2024-11-20 09:58:38.849654] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:02.033 [2024-11-20 09:58:38.849678] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:02.033 [2024-11-20 09:58:38.849687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:02.033 [2024-11-20 09:58:38.849723] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:02.033 [2024-11-20 09:58:38.849769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.033 [2024-11-20 09:58:38.849790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.033 [2024-11-20 09:58:38.849808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.033 [2024-11-20 09:58:38.849821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.033 [2024-11-20 09:58:38.849834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.033 [2024-11-20 09:58:38.849847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.033 [2024-11-20 09:58:38.849860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.033 [2024-11-20 09:58:38.849872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.033 [2024-11-20 09:58:38.849885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.033 [2024-11-20 09:58:38.849897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.033 [2024-11-20 09:58:38.849910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:25:02.033 [2024-11-20 09:58:38.849991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfab40 (9): Bad file descriptor 00:25:02.033 [2024-11-20 09:58:38.851030] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:02.033 [2024-11-20 09:58:38.851052] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:02.033 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:02.033 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.033 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:02.033 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.033 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:02.033 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:02.033 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:02.033 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.033 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:02.033 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:02.033 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:02.033 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:02.033 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:02.291 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.291 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:02.291 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.291 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:02.291 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:02.291 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:02.291 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.291 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:02.291 09:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:03.223 09:58:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:03.223 09:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:03.223 09:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:03.223 09:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.223 09:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:03.223 09:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.223 09:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:03.223 09:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.223 09:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:03.223 09:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:04.156 [2024-11-20 09:58:40.900463] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:04.156 [2024-11-20 09:58:40.900496] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:04.156 [2024-11-20 09:58:40.900520] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:04.156 [2024-11-20 09:58:40.986811] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:04.156 09:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:04.156 09:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:04.156 09:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:04.156 09:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.156 09:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:04.156 09:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:04.156 09:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:04.156 09:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.414 09:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:04.414 09:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:04.414 [2024-11-20 09:58:41.081674] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:25:04.414 [2024-11-20 09:58:41.082511] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1c15be0:1 started. 
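Annotation: the Discovery[10.0.0.2:8009] messages above come from the host-side discovery poller, which re-attaches the subsystem as nvme1 once the path returns. A sketch of how such a poller is started over the same RPC socket; the bdev_nvme_start_discovery parameters shown are assumptions based on the SPDK RPC of that name and are not echoed anywhere in this trace.

  #!/usr/bin/env bash
  # Start a discovery poller against the 10.0.0.2:8009 discovery service from the trace.
  rpc_py=${rpc_py:-./scripts/rpc.py}   # assumed rpc.py location
  host_sock=/tmp/host.sock             # host RPC socket used throughout the trace

  # Base name "nvme" yields controllers nvme1, nvme2, ... as subsystems are discovered.
  "$rpc_py" -s "$host_sock" bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4      # flag spellings are assumptions

  # The resulting namespaces can then be listed over the same socket:
  "$rpc_py" -s "$host_sock" bdev_get_bdevs | jq -r '.[].name'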
00:25:04.414 [2024-11-20 09:58:41.083911] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:04.414 [2024-11-20 09:58:41.083952] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:04.414 [2024-11-20 09:58:41.083983] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:04.414 [2024-11-20 09:58:41.084002] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:04.414 [2024-11-20 09:58:41.084015] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:04.414 [2024-11-20 09:58:41.089207] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1c15be0 was disconnected and freed. delete nvme_qpair. 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3823171 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3823171 ']' 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3823171 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3823171 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3823171' 00:25:05.352 killing process with pid 3823171 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3823171 00:25:05.352 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3823171 00:25:05.609 09:58:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:05.609 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:05.610 rmmod nvme_tcp 00:25:05.610 rmmod nvme_fabrics 00:25:05.610 rmmod nvme_keyring 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3823146 ']' 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3823146 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3823146 ']' 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3823146 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3823146 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3823146' 00:25:05.610 killing process with pid 3823146 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3823146 00:25:05.610 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3823146 00:25:05.870 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:05.870 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:05.870 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:05.870 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:05.870 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:05.870 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:05.870 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:05.870 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:05.870 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:05.870 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.870 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.870 09:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:08.403 00:25:08.403 real 0m17.871s 00:25:08.403 user 0m25.824s 00:25:08.403 sys 0m3.094s 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.403 ************************************ 00:25:08.403 END TEST nvmf_discovery_remove_ifc 00:25:08.403 ************************************ 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.403 ************************************ 00:25:08.403 START TEST nvmf_identify_kernel_target 00:25:08.403 ************************************ 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:08.403 * Looking for test storage... 
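Annotation: before the next test starts, the nvmftestfini teardown visible above unloads the host NVMe/TCP modules, strips the SPDK-tagged firewall rules and clears the test addressing. A compressed sketch of that sequence, reconstructed from the commands echoed in the trace; the namespace and interface names are the ones used by this run, and the body of remove_spdk_ns is not traced, so the ip netns delete line is an assumption.

  #!/usr/bin/env bash
  # Teardown steps mirroring the nvmftestfini trace above.
  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Drop only the firewall rules the test suite tagged with SPDK_NVMF comments.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Remove the target namespace and flush the initiator-side address.
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed body of remove_spdk_ns
  ip -4 addr flush cvl_0_1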
00:25:08.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:08.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.403 --rc genhtml_branch_coverage=1 00:25:08.403 --rc genhtml_function_coverage=1 00:25:08.403 --rc genhtml_legend=1 00:25:08.403 --rc geninfo_all_blocks=1 00:25:08.403 --rc geninfo_unexecuted_blocks=1 00:25:08.403 00:25:08.403 ' 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:08.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.403 --rc genhtml_branch_coverage=1 00:25:08.403 --rc genhtml_function_coverage=1 00:25:08.403 --rc genhtml_legend=1 00:25:08.403 --rc geninfo_all_blocks=1 00:25:08.403 --rc geninfo_unexecuted_blocks=1 00:25:08.403 00:25:08.403 ' 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:08.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.403 --rc genhtml_branch_coverage=1 00:25:08.403 --rc genhtml_function_coverage=1 00:25:08.403 --rc genhtml_legend=1 00:25:08.403 --rc geninfo_all_blocks=1 00:25:08.403 --rc geninfo_unexecuted_blocks=1 00:25:08.403 00:25:08.403 ' 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:08.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.403 --rc genhtml_branch_coverage=1 00:25:08.403 --rc genhtml_function_coverage=1 00:25:08.403 --rc genhtml_legend=1 00:25:08.403 --rc geninfo_all_blocks=1 00:25:08.403 --rc geninfo_unexecuted_blocks=1 00:25:08.403 00:25:08.403 ' 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.403 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:25:08.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:08.404 09:58:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:10.307 09:58:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:10.307 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:10.307 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:10.308 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:10.308 Found net devices under 0000:09:00.0: cvl_0_0 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:10.308 Found net devices under 0000:09:00.1: cvl_0_1 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:10.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:10.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:25:10.308 00:25:10.308 --- 10.0.0.2 ping statistics --- 00:25:10.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.308 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:10.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:25:10.308 00:25:10.308 --- 10.0.0.1 ping statistics --- 00:25:10.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.308 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:10.308 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:10.566 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:10.566 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:10.566 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:10.566 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.566 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.566 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.566 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.566 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.567 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.567 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.567 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.567 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.567 09:58:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:10.567 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:10.567 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:10.567 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:10.567 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:10.567 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:10.567 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:10.567 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:10.567 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:10.567 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:10.567 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:10.567 09:58:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:11.505 Waiting for block devices as requested 00:25:11.505 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:11.765 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:11.765 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:11.765 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:12.025 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:12.025 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:12.025 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:12.025 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:12.286 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:25:12.286 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:12.286 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:12.544 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:12.544 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:12.544 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:12.801 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:12.801 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:12.801 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:13.059 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:13.059 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:13.059 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:13.059 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:13.059 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:13.059 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
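Annotation: the configure_kernel_target steps echoed around this point build the kernel nvmet target purely through configfs; the individual mkdir and echo commands are spread across the trace that follows. A consolidated sketch of those steps, using the subsystem NQN, address and port from this run and the /dev/nvme0n1 backing device the helper selects below. The echoed values come from the log, but the attribute file names are the standard nvmet configfs names, which the xtrace does not show because redirections are not traced; attr_model in particular is inferred from the Model Number reported by the later identify output.

  #!/usr/bin/env bash
  # Kernel nvmet target over configfs, mirroring the configure_kernel_target trace.
  set -e
  nqn=nqn.2016-06.io.spdk:testnqn
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/$nqn
  ns=$subsys/namespaces/1
  port=$nvmet/ports/1
  blkdev=/dev/nvme0n1          # unzoned, unused block device picked by the helper

  modprobe nvmet
  mkdir "$subsys"
  mkdir "$ns"
  mkdir "$port"

  echo "SPDK-$nqn" > "$subsys/attr_model"           # appears as Model Number in identify
  echo 1           > "$subsys/attr_allow_any_host"
  echo "$blkdev"   > "$ns/device_path"
  echo 1           > "$ns/enable"
  echo 10.0.0.1    > "$port/addr_traddr"
  echo tcp         > "$port/addr_trtype"
  echo 4420        > "$port/addr_trsvcid"
  echo ipv4        > "$port/addr_adrfam"

  # Expose the subsystem on the port and verify it through the discovery service.
  ln -s "$subsys" "$port/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420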
00:25:13.059 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:13.059 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:13.059 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:13.059 No valid GPT data, bailing 00:25:13.059 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:13.059 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:13.059 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:13.059 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:13.059 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:13.059 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:13.059 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:13.060 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:13.060 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:13.060 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:13.060 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:13.060 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:13.060 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:13.060 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:13.060 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:13.060 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:13.060 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:13.060 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:25:13.060 00:25:13.060 Discovery Log Number of Records 2, Generation counter 2 00:25:13.060 =====Discovery Log Entry 0====== 00:25:13.060 trtype: tcp 00:25:13.060 adrfam: ipv4 00:25:13.060 subtype: current discovery subsystem 00:25:13.060 treq: not specified, sq flow control disable supported 00:25:13.060 portid: 1 00:25:13.060 trsvcid: 4420 00:25:13.060 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:13.060 traddr: 10.0.0.1 00:25:13.060 eflags: none 00:25:13.060 sectype: none 00:25:13.060 =====Discovery Log Entry 1====== 00:25:13.060 trtype: tcp 00:25:13.060 adrfam: ipv4 00:25:13.060 subtype: nvme subsystem 00:25:13.060 treq: not specified, sq flow control disable 
supported 00:25:13.060 portid: 1 00:25:13.060 trsvcid: 4420 00:25:13.060 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:13.060 traddr: 10.0.0.1 00:25:13.060 eflags: none 00:25:13.060 sectype: none 00:25:13.060 09:58:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:13.060 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:13.319 ===================================================== 00:25:13.319 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:13.319 ===================================================== 00:25:13.319 Controller Capabilities/Features 00:25:13.319 ================================ 00:25:13.319 Vendor ID: 0000 00:25:13.319 Subsystem Vendor ID: 0000 00:25:13.319 Serial Number: 1719f6b3726d8c19cba1 00:25:13.319 Model Number: Linux 00:25:13.319 Firmware Version: 6.8.9-20 00:25:13.319 Recommended Arb Burst: 0 00:25:13.319 IEEE OUI Identifier: 00 00 00 00:25:13.319 Multi-path I/O 00:25:13.319 May have multiple subsystem ports: No 00:25:13.319 May have multiple controllers: No 00:25:13.319 Associated with SR-IOV VF: No 00:25:13.319 Max Data Transfer Size: Unlimited 00:25:13.319 Max Number of Namespaces: 0 00:25:13.319 Max Number of I/O Queues: 1024 00:25:13.319 NVMe Specification Version (VS): 1.3 00:25:13.319 NVMe Specification Version (Identify): 1.3 00:25:13.319 Maximum Queue Entries: 1024 00:25:13.319 Contiguous Queues Required: No 00:25:13.319 Arbitration Mechanisms Supported 00:25:13.319 Weighted Round Robin: Not Supported 00:25:13.319 Vendor Specific: Not Supported 00:25:13.319 Reset Timeout: 7500 ms 00:25:13.319 Doorbell Stride: 4 bytes 00:25:13.319 NVM Subsystem Reset: Not Supported 00:25:13.319 Command Sets Supported 00:25:13.319 NVM Command Set: Supported 00:25:13.319 Boot Partition: Not Supported 00:25:13.319 Memory Page Size Minimum: 4096 bytes 00:25:13.319 Memory Page Size Maximum: 4096 bytes 00:25:13.319 Persistent Memory Region: Not Supported 00:25:13.319 Optional Asynchronous Events Supported 00:25:13.319 Namespace Attribute Notices: Not Supported 00:25:13.319 Firmware Activation Notices: Not Supported 00:25:13.319 ANA Change Notices: Not Supported 00:25:13.319 PLE Aggregate Log Change Notices: Not Supported 00:25:13.319 LBA Status Info Alert Notices: Not Supported 00:25:13.319 EGE Aggregate Log Change Notices: Not Supported 00:25:13.319 Normal NVM Subsystem Shutdown event: Not Supported 00:25:13.319 Zone Descriptor Change Notices: Not Supported 00:25:13.319 Discovery Log Change Notices: Supported 00:25:13.319 Controller Attributes 00:25:13.319 128-bit Host Identifier: Not Supported 00:25:13.319 Non-Operational Permissive Mode: Not Supported 00:25:13.319 NVM Sets: Not Supported 00:25:13.319 Read Recovery Levels: Not Supported 00:25:13.319 Endurance Groups: Not Supported 00:25:13.319 Predictable Latency Mode: Not Supported 00:25:13.319 Traffic Based Keep ALive: Not Supported 00:25:13.319 Namespace Granularity: Not Supported 00:25:13.319 SQ Associations: Not Supported 00:25:13.319 UUID List: Not Supported 00:25:13.319 Multi-Domain Subsystem: Not Supported 00:25:13.319 Fixed Capacity Management: Not Supported 00:25:13.319 Variable Capacity Management: Not Supported 00:25:13.319 Delete Endurance Group: Not Supported 00:25:13.319 Delete NVM Set: Not Supported 00:25:13.319 Extended LBA Formats Supported: Not Supported 00:25:13.319 Flexible Data Placement 
Supported: Not Supported 00:25:13.319 00:25:13.319 Controller Memory Buffer Support 00:25:13.319 ================================ 00:25:13.319 Supported: No 00:25:13.319 00:25:13.319 Persistent Memory Region Support 00:25:13.319 ================================ 00:25:13.319 Supported: No 00:25:13.319 00:25:13.319 Admin Command Set Attributes 00:25:13.319 ============================ 00:25:13.319 Security Send/Receive: Not Supported 00:25:13.319 Format NVM: Not Supported 00:25:13.319 Firmware Activate/Download: Not Supported 00:25:13.319 Namespace Management: Not Supported 00:25:13.319 Device Self-Test: Not Supported 00:25:13.319 Directives: Not Supported 00:25:13.319 NVMe-MI: Not Supported 00:25:13.319 Virtualization Management: Not Supported 00:25:13.319 Doorbell Buffer Config: Not Supported 00:25:13.319 Get LBA Status Capability: Not Supported 00:25:13.319 Command & Feature Lockdown Capability: Not Supported 00:25:13.320 Abort Command Limit: 1 00:25:13.320 Async Event Request Limit: 1 00:25:13.320 Number of Firmware Slots: N/A 00:25:13.320 Firmware Slot 1 Read-Only: N/A 00:25:13.320 Firmware Activation Without Reset: N/A 00:25:13.320 Multiple Update Detection Support: N/A 00:25:13.320 Firmware Update Granularity: No Information Provided 00:25:13.320 Per-Namespace SMART Log: No 00:25:13.320 Asymmetric Namespace Access Log Page: Not Supported 00:25:13.320 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:13.320 Command Effects Log Page: Not Supported 00:25:13.320 Get Log Page Extended Data: Supported 00:25:13.320 Telemetry Log Pages: Not Supported 00:25:13.320 Persistent Event Log Pages: Not Supported 00:25:13.320 Supported Log Pages Log Page: May Support 00:25:13.320 Commands Supported & Effects Log Page: Not Supported 00:25:13.320 Feature Identifiers & Effects Log Page:May Support 00:25:13.320 NVMe-MI Commands & Effects Log Page: May Support 00:25:13.320 Data Area 4 for Telemetry Log: Not Supported 00:25:13.320 Error Log Page Entries Supported: 1 00:25:13.320 Keep Alive: Not Supported 00:25:13.320 00:25:13.320 NVM Command Set Attributes 00:25:13.320 ========================== 00:25:13.320 Submission Queue Entry Size 00:25:13.320 Max: 1 00:25:13.320 Min: 1 00:25:13.320 Completion Queue Entry Size 00:25:13.320 Max: 1 00:25:13.320 Min: 1 00:25:13.320 Number of Namespaces: 0 00:25:13.320 Compare Command: Not Supported 00:25:13.320 Write Uncorrectable Command: Not Supported 00:25:13.320 Dataset Management Command: Not Supported 00:25:13.320 Write Zeroes Command: Not Supported 00:25:13.320 Set Features Save Field: Not Supported 00:25:13.320 Reservations: Not Supported 00:25:13.320 Timestamp: Not Supported 00:25:13.320 Copy: Not Supported 00:25:13.320 Volatile Write Cache: Not Present 00:25:13.320 Atomic Write Unit (Normal): 1 00:25:13.320 Atomic Write Unit (PFail): 1 00:25:13.320 Atomic Compare & Write Unit: 1 00:25:13.320 Fused Compare & Write: Not Supported 00:25:13.320 Scatter-Gather List 00:25:13.320 SGL Command Set: Supported 00:25:13.320 SGL Keyed: Not Supported 00:25:13.320 SGL Bit Bucket Descriptor: Not Supported 00:25:13.320 SGL Metadata Pointer: Not Supported 00:25:13.320 Oversized SGL: Not Supported 00:25:13.320 SGL Metadata Address: Not Supported 00:25:13.320 SGL Offset: Supported 00:25:13.320 Transport SGL Data Block: Not Supported 00:25:13.320 Replay Protected Memory Block: Not Supported 00:25:13.320 00:25:13.320 Firmware Slot Information 00:25:13.320 ========================= 00:25:13.320 Active slot: 0 00:25:13.320 00:25:13.320 00:25:13.320 Error Log 00:25:13.320 
========= 00:25:13.320 00:25:13.320 Active Namespaces 00:25:13.320 ================= 00:25:13.320 Discovery Log Page 00:25:13.320 ================== 00:25:13.320 Generation Counter: 2 00:25:13.320 Number of Records: 2 00:25:13.320 Record Format: 0 00:25:13.320 00:25:13.320 Discovery Log Entry 0 00:25:13.320 ---------------------- 00:25:13.320 Transport Type: 3 (TCP) 00:25:13.320 Address Family: 1 (IPv4) 00:25:13.320 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:13.320 Entry Flags: 00:25:13.320 Duplicate Returned Information: 0 00:25:13.320 Explicit Persistent Connection Support for Discovery: 0 00:25:13.320 Transport Requirements: 00:25:13.320 Secure Channel: Not Specified 00:25:13.320 Port ID: 1 (0x0001) 00:25:13.320 Controller ID: 65535 (0xffff) 00:25:13.320 Admin Max SQ Size: 32 00:25:13.320 Transport Service Identifier: 4420 00:25:13.320 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:13.320 Transport Address: 10.0.0.1 00:25:13.320 Discovery Log Entry 1 00:25:13.320 ---------------------- 00:25:13.320 Transport Type: 3 (TCP) 00:25:13.320 Address Family: 1 (IPv4) 00:25:13.320 Subsystem Type: 2 (NVM Subsystem) 00:25:13.320 Entry Flags: 00:25:13.320 Duplicate Returned Information: 0 00:25:13.320 Explicit Persistent Connection Support for Discovery: 0 00:25:13.320 Transport Requirements: 00:25:13.320 Secure Channel: Not Specified 00:25:13.320 Port ID: 1 (0x0001) 00:25:13.320 Controller ID: 65535 (0xffff) 00:25:13.320 Admin Max SQ Size: 32 00:25:13.320 Transport Service Identifier: 4420 00:25:13.320 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:13.320 Transport Address: 10.0.0.1 00:25:13.320 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:13.320 get_feature(0x01) failed 00:25:13.320 get_feature(0x02) failed 00:25:13.320 get_feature(0x04) failed 00:25:13.320 ===================================================== 00:25:13.320 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:13.320 ===================================================== 00:25:13.320 Controller Capabilities/Features 00:25:13.320 ================================ 00:25:13.320 Vendor ID: 0000 00:25:13.320 Subsystem Vendor ID: 0000 00:25:13.320 Serial Number: 71d857f6808098a74264 00:25:13.320 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:13.320 Firmware Version: 6.8.9-20 00:25:13.320 Recommended Arb Burst: 6 00:25:13.320 IEEE OUI Identifier: 00 00 00 00:25:13.320 Multi-path I/O 00:25:13.320 May have multiple subsystem ports: Yes 00:25:13.320 May have multiple controllers: Yes 00:25:13.320 Associated with SR-IOV VF: No 00:25:13.320 Max Data Transfer Size: Unlimited 00:25:13.320 Max Number of Namespaces: 1024 00:25:13.320 Max Number of I/O Queues: 128 00:25:13.320 NVMe Specification Version (VS): 1.3 00:25:13.320 NVMe Specification Version (Identify): 1.3 00:25:13.320 Maximum Queue Entries: 1024 00:25:13.320 Contiguous Queues Required: No 00:25:13.320 Arbitration Mechanisms Supported 00:25:13.320 Weighted Round Robin: Not Supported 00:25:13.320 Vendor Specific: Not Supported 00:25:13.320 Reset Timeout: 7500 ms 00:25:13.320 Doorbell Stride: 4 bytes 00:25:13.320 NVM Subsystem Reset: Not Supported 00:25:13.320 Command Sets Supported 00:25:13.320 NVM Command Set: Supported 00:25:13.320 Boot Partition: Not Supported 00:25:13.320 
Memory Page Size Minimum: 4096 bytes 00:25:13.320 Memory Page Size Maximum: 4096 bytes 00:25:13.320 Persistent Memory Region: Not Supported 00:25:13.320 Optional Asynchronous Events Supported 00:25:13.320 Namespace Attribute Notices: Supported 00:25:13.320 Firmware Activation Notices: Not Supported 00:25:13.320 ANA Change Notices: Supported 00:25:13.320 PLE Aggregate Log Change Notices: Not Supported 00:25:13.320 LBA Status Info Alert Notices: Not Supported 00:25:13.320 EGE Aggregate Log Change Notices: Not Supported 00:25:13.320 Normal NVM Subsystem Shutdown event: Not Supported 00:25:13.320 Zone Descriptor Change Notices: Not Supported 00:25:13.320 Discovery Log Change Notices: Not Supported 00:25:13.320 Controller Attributes 00:25:13.320 128-bit Host Identifier: Supported 00:25:13.320 Non-Operational Permissive Mode: Not Supported 00:25:13.320 NVM Sets: Not Supported 00:25:13.320 Read Recovery Levels: Not Supported 00:25:13.320 Endurance Groups: Not Supported 00:25:13.320 Predictable Latency Mode: Not Supported 00:25:13.320 Traffic Based Keep ALive: Supported 00:25:13.320 Namespace Granularity: Not Supported 00:25:13.320 SQ Associations: Not Supported 00:25:13.320 UUID List: Not Supported 00:25:13.320 Multi-Domain Subsystem: Not Supported 00:25:13.320 Fixed Capacity Management: Not Supported 00:25:13.320 Variable Capacity Management: Not Supported 00:25:13.320 Delete Endurance Group: Not Supported 00:25:13.320 Delete NVM Set: Not Supported 00:25:13.320 Extended LBA Formats Supported: Not Supported 00:25:13.320 Flexible Data Placement Supported: Not Supported 00:25:13.320 00:25:13.320 Controller Memory Buffer Support 00:25:13.320 ================================ 00:25:13.320 Supported: No 00:25:13.320 00:25:13.320 Persistent Memory Region Support 00:25:13.320 ================================ 00:25:13.320 Supported: No 00:25:13.320 00:25:13.320 Admin Command Set Attributes 00:25:13.320 ============================ 00:25:13.320 Security Send/Receive: Not Supported 00:25:13.320 Format NVM: Not Supported 00:25:13.320 Firmware Activate/Download: Not Supported 00:25:13.320 Namespace Management: Not Supported 00:25:13.320 Device Self-Test: Not Supported 00:25:13.320 Directives: Not Supported 00:25:13.320 NVMe-MI: Not Supported 00:25:13.320 Virtualization Management: Not Supported 00:25:13.320 Doorbell Buffer Config: Not Supported 00:25:13.320 Get LBA Status Capability: Not Supported 00:25:13.320 Command & Feature Lockdown Capability: Not Supported 00:25:13.321 Abort Command Limit: 4 00:25:13.321 Async Event Request Limit: 4 00:25:13.321 Number of Firmware Slots: N/A 00:25:13.321 Firmware Slot 1 Read-Only: N/A 00:25:13.321 Firmware Activation Without Reset: N/A 00:25:13.321 Multiple Update Detection Support: N/A 00:25:13.321 Firmware Update Granularity: No Information Provided 00:25:13.321 Per-Namespace SMART Log: Yes 00:25:13.321 Asymmetric Namespace Access Log Page: Supported 00:25:13.321 ANA Transition Time : 10 sec 00:25:13.321 00:25:13.321 Asymmetric Namespace Access Capabilities 00:25:13.321 ANA Optimized State : Supported 00:25:13.321 ANA Non-Optimized State : Supported 00:25:13.321 ANA Inaccessible State : Supported 00:25:13.321 ANA Persistent Loss State : Supported 00:25:13.321 ANA Change State : Supported 00:25:13.321 ANAGRPID is not changed : No 00:25:13.321 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:13.321 00:25:13.321 ANA Group Identifier Maximum : 128 00:25:13.321 Number of ANA Group Identifiers : 128 00:25:13.321 Max Number of Allowed Namespaces : 1024 00:25:13.321 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:13.321 Command Effects Log Page: Supported 00:25:13.321 Get Log Page Extended Data: Supported 00:25:13.321 Telemetry Log Pages: Not Supported 00:25:13.321 Persistent Event Log Pages: Not Supported 00:25:13.321 Supported Log Pages Log Page: May Support 00:25:13.321 Commands Supported & Effects Log Page: Not Supported 00:25:13.321 Feature Identifiers & Effects Log Page:May Support 00:25:13.321 NVMe-MI Commands & Effects Log Page: May Support 00:25:13.321 Data Area 4 for Telemetry Log: Not Supported 00:25:13.321 Error Log Page Entries Supported: 128 00:25:13.321 Keep Alive: Supported 00:25:13.321 Keep Alive Granularity: 1000 ms 00:25:13.321 00:25:13.321 NVM Command Set Attributes 00:25:13.321 ========================== 00:25:13.321 Submission Queue Entry Size 00:25:13.321 Max: 64 00:25:13.321 Min: 64 00:25:13.321 Completion Queue Entry Size 00:25:13.321 Max: 16 00:25:13.321 Min: 16 00:25:13.321 Number of Namespaces: 1024 00:25:13.321 Compare Command: Not Supported 00:25:13.321 Write Uncorrectable Command: Not Supported 00:25:13.321 Dataset Management Command: Supported 00:25:13.321 Write Zeroes Command: Supported 00:25:13.321 Set Features Save Field: Not Supported 00:25:13.321 Reservations: Not Supported 00:25:13.321 Timestamp: Not Supported 00:25:13.321 Copy: Not Supported 00:25:13.321 Volatile Write Cache: Present 00:25:13.321 Atomic Write Unit (Normal): 1 00:25:13.321 Atomic Write Unit (PFail): 1 00:25:13.321 Atomic Compare & Write Unit: 1 00:25:13.321 Fused Compare & Write: Not Supported 00:25:13.321 Scatter-Gather List 00:25:13.321 SGL Command Set: Supported 00:25:13.321 SGL Keyed: Not Supported 00:25:13.321 SGL Bit Bucket Descriptor: Not Supported 00:25:13.321 SGL Metadata Pointer: Not Supported 00:25:13.321 Oversized SGL: Not Supported 00:25:13.321 SGL Metadata Address: Not Supported 00:25:13.321 SGL Offset: Supported 00:25:13.321 Transport SGL Data Block: Not Supported 00:25:13.321 Replay Protected Memory Block: Not Supported 00:25:13.321 00:25:13.321 Firmware Slot Information 00:25:13.321 ========================= 00:25:13.321 Active slot: 0 00:25:13.321 00:25:13.321 Asymmetric Namespace Access 00:25:13.321 =========================== 00:25:13.321 Change Count : 0 00:25:13.321 Number of ANA Group Descriptors : 1 00:25:13.321 ANA Group Descriptor : 0 00:25:13.321 ANA Group ID : 1 00:25:13.321 Number of NSID Values : 1 00:25:13.321 Change Count : 0 00:25:13.321 ANA State : 1 00:25:13.321 Namespace Identifier : 1 00:25:13.321 00:25:13.321 Commands Supported and Effects 00:25:13.321 ============================== 00:25:13.321 Admin Commands 00:25:13.321 -------------- 00:25:13.321 Get Log Page (02h): Supported 00:25:13.321 Identify (06h): Supported 00:25:13.321 Abort (08h): Supported 00:25:13.321 Set Features (09h): Supported 00:25:13.321 Get Features (0Ah): Supported 00:25:13.321 Asynchronous Event Request (0Ch): Supported 00:25:13.321 Keep Alive (18h): Supported 00:25:13.321 I/O Commands 00:25:13.321 ------------ 00:25:13.321 Flush (00h): Supported 00:25:13.321 Write (01h): Supported LBA-Change 00:25:13.321 Read (02h): Supported 00:25:13.321 Write Zeroes (08h): Supported LBA-Change 00:25:13.321 Dataset Management (09h): Supported 00:25:13.321 00:25:13.321 Error Log 00:25:13.321 ========= 00:25:13.321 Entry: 0 00:25:13.321 Error Count: 0x3 00:25:13.321 Submission Queue Id: 0x0 00:25:13.321 Command Id: 0x5 00:25:13.321 Phase Bit: 0 00:25:13.321 Status Code: 0x2 00:25:13.321 Status Code Type: 0x0 00:25:13.321 Do Not Retry: 1 00:25:13.321 
Error Location: 0x28 00:25:13.321 LBA: 0x0 00:25:13.321 Namespace: 0x0 00:25:13.321 Vendor Log Page: 0x0 00:25:13.321 ----------- 00:25:13.321 Entry: 1 00:25:13.321 Error Count: 0x2 00:25:13.321 Submission Queue Id: 0x0 00:25:13.321 Command Id: 0x5 00:25:13.321 Phase Bit: 0 00:25:13.321 Status Code: 0x2 00:25:13.321 Status Code Type: 0x0 00:25:13.321 Do Not Retry: 1 00:25:13.321 Error Location: 0x28 00:25:13.321 LBA: 0x0 00:25:13.321 Namespace: 0x0 00:25:13.321 Vendor Log Page: 0x0 00:25:13.321 ----------- 00:25:13.321 Entry: 2 00:25:13.321 Error Count: 0x1 00:25:13.321 Submission Queue Id: 0x0 00:25:13.321 Command Id: 0x4 00:25:13.321 Phase Bit: 0 00:25:13.321 Status Code: 0x2 00:25:13.321 Status Code Type: 0x0 00:25:13.321 Do Not Retry: 1 00:25:13.321 Error Location: 0x28 00:25:13.321 LBA: 0x0 00:25:13.321 Namespace: 0x0 00:25:13.321 Vendor Log Page: 0x0 00:25:13.321 00:25:13.321 Number of Queues 00:25:13.321 ================ 00:25:13.321 Number of I/O Submission Queues: 128 00:25:13.321 Number of I/O Completion Queues: 128 00:25:13.321 00:25:13.321 ZNS Specific Controller Data 00:25:13.321 ============================ 00:25:13.321 Zone Append Size Limit: 0 00:25:13.321 00:25:13.321 00:25:13.321 Active Namespaces 00:25:13.321 ================= 00:25:13.321 get_feature(0x05) failed 00:25:13.321 Namespace ID:1 00:25:13.321 Command Set Identifier: NVM (00h) 00:25:13.321 Deallocate: Supported 00:25:13.321 Deallocated/Unwritten Error: Not Supported 00:25:13.321 Deallocated Read Value: Unknown 00:25:13.321 Deallocate in Write Zeroes: Not Supported 00:25:13.321 Deallocated Guard Field: 0xFFFF 00:25:13.321 Flush: Supported 00:25:13.321 Reservation: Not Supported 00:25:13.321 Namespace Sharing Capabilities: Multiple Controllers 00:25:13.321 Size (in LBAs): 1953525168 (931GiB) 00:25:13.321 Capacity (in LBAs): 1953525168 (931GiB) 00:25:13.321 Utilization (in LBAs): 1953525168 (931GiB) 00:25:13.321 UUID: 6b3a272d-a41f-4b56-8e08-27c20819be1d 00:25:13.321 Thin Provisioning: Not Supported 00:25:13.321 Per-NS Atomic Units: Yes 00:25:13.321 Atomic Boundary Size (Normal): 0 00:25:13.321 Atomic Boundary Size (PFail): 0 00:25:13.321 Atomic Boundary Offset: 0 00:25:13.321 NGUID/EUI64 Never Reused: No 00:25:13.321 ANA group ID: 1 00:25:13.321 Namespace Write Protected: No 00:25:13.321 Number of LBA Formats: 1 00:25:13.321 Current LBA Format: LBA Format #00 00:25:13.321 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:13.321 00:25:13.321 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:13.321 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:13.321 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:13.321 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:13.321 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:13.321 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:13.321 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:13.321 rmmod nvme_tcp 00:25:13.321 rmmod nvme_fabrics 00:25:13.321 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:13.321 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:13.321 09:58:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:13.321 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:25:13.321 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:13.321 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:13.321 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:13.321 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:13.321 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:13.321 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:13.322 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:13.322 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:13.322 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:13.322 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.322 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.322 09:58:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.859 09:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:15.859 09:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:15.859 09:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:15.859 09:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:15.859 09:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:15.859 09:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:15.859 09:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:15.859 09:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:15.859 09:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:15.859 09:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:15.859 09:58:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:16.795 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:16.795 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:16.795 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:16.795 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:16.795 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:16.796 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:25:16.796 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:16.796 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:16.796 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:16.796 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:16.796 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:16.796 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:16.796 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:16.796 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:16.796 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:16.796 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:17.729 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:25:17.987 00:25:17.987 real 0m9.974s 00:25:17.987 user 0m2.101s 00:25:17.987 sys 0m3.796s 00:25:17.987 09:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:17.987 09:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:17.987 ************************************ 00:25:17.987 END TEST nvmf_identify_kernel_target 00:25:17.987 ************************************ 00:25:17.987 09:58:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:17.987 09:58:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:17.987 09:58:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:17.987 09:58:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.987 ************************************ 00:25:17.987 START TEST nvmf_auth_host 00:25:17.987 ************************************ 00:25:17.987 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:17.987 * Looking for test storage... 
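Putting together the configfs steps traced in this test (the mkdir/echo/ln -s sequence near the top of nvmf_identify_kernel_target and the clean_kernel_target teardown just above), a minimal kernel NVMe/TCP target equivalent to what nvmf/common.sh drives looks roughly like the sketch below. The xtrace output does not show redirection targets, so the attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet configfs names rather than something visible in this log, and /dev/nvme0n1 is simply the backing device the trace happened to select.

NQN=nqn.2016-06.io.spdk:testnqn
CFG=/sys/kernel/config/nvmet
modprobe nvmet nvmet-tcp                        # kernel target core + TCP transport

mkdir -p $CFG/subsystems/$NQN/namespaces/1      # subsystem with one namespace
mkdir -p $CFG/ports/1                           # one listener port

echo SPDK-$NQN    > $CFG/subsystems/$NQN/attr_model            # shows up as "Model Number" in the identify output above
echo 1            > $CFG/subsystems/$NQN/attr_allow_any_host   # assumed target of the first 'echo 1' in the trace
echo /dev/nvme0n1 > $CFG/subsystems/$NQN/namespaces/1/device_path
echo 1            > $CFG/subsystems/$NQN/namespaces/1/enable
echo 10.0.0.1     > $CFG/ports/1/addr_traddr
echo tcp          > $CFG/ports/1/addr_trtype
echo 4420         > $CFG/ports/1/addr_trsvcid
echo ipv4         > $CFG/ports/1/addr_adrfam
ln -s $CFG/subsystems/$NQN $CFG/ports/1/subsystems/            # expose the subsystem on the port

# Teardown, mirroring clean_kernel_target in the trace:
echo 0 > $CFG/subsystems/$NQN/namespaces/1/enable   # assumed target of the 'echo 0'
rm -f  $CFG/ports/1/subsystems/$NQN
rmdir  $CFG/subsystems/$NQN/namespaces/1
rmdir  $CFG/ports/1
rmdir  $CFG/subsystems/$NQN
modprobe -r nvmet_tcp nvmet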
00:25:17.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:17.987 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:17.987 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:17.987 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:18.245 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:18.245 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:18.245 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:18.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.246 --rc genhtml_branch_coverage=1 00:25:18.246 --rc genhtml_function_coverage=1 00:25:18.246 --rc genhtml_legend=1 00:25:18.246 --rc geninfo_all_blocks=1 00:25:18.246 --rc geninfo_unexecuted_blocks=1 00:25:18.246 00:25:18.246 ' 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:18.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.246 --rc genhtml_branch_coverage=1 00:25:18.246 --rc genhtml_function_coverage=1 00:25:18.246 --rc genhtml_legend=1 00:25:18.246 --rc geninfo_all_blocks=1 00:25:18.246 --rc geninfo_unexecuted_blocks=1 00:25:18.246 00:25:18.246 ' 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:18.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.246 --rc genhtml_branch_coverage=1 00:25:18.246 --rc genhtml_function_coverage=1 00:25:18.246 --rc genhtml_legend=1 00:25:18.246 --rc geninfo_all_blocks=1 00:25:18.246 --rc geninfo_unexecuted_blocks=1 00:25:18.246 00:25:18.246 ' 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:18.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.246 --rc genhtml_branch_coverage=1 00:25:18.246 --rc genhtml_function_coverage=1 00:25:18.246 --rc genhtml_legend=1 00:25:18.246 --rc geninfo_all_blocks=1 00:25:18.246 --rc geninfo_unexecuted_blocks=1 00:25:18.246 00:25:18.246 ' 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:18.246 09:58:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:18.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:18.246 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:25:18.247 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:18.247 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:18.247 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:18.247 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:18.247 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:18.247 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:18.247 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:18.247 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.247 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:18.247 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:18.247 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:18.247 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.247 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.247 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.247 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:18.247 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:18.247 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:18.247 09:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:20.812 09:58:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:20.812 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:20.813 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:20.813 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.813 
09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:20.813 Found net devices under 0000:09:00.0: cvl_0_0 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:20.813 Found net devices under 0000:09:00.1: cvl_0_1 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.813 09:58:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:20.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:25:20.813 00:25:20.813 --- 10.0.0.2 ping statistics --- 00:25:20.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.813 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:20.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:25:20.813 00:25:20.813 --- 10.0.0.1 ping statistics --- 00:25:20.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.813 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3830384 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3830384 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3830384 ']' 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.813 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
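The network plumbing traced just above (ip netns / ip addr / iptables / ping) plus the nvmf_tgt start reduce to a fairly small recipe. The sketch below restates it with the same namespace and interface names; the polling loop at the end only illustrates what waitforlisten waits for (the app's default RPC socket at /var/tmp/spdk.sock) and is not the actual helper from autotest_common.sh.

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0            # moved into the namespace (target side)
INI_IF=cvl_0_1            # stays in the default namespace (initiator side)

ip netns add $NS
ip link set $TGT_IF netns $NS
ip addr add 10.0.0.1/24 dev $INI_IF
ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF
ip link set $INI_IF up
ip netns exec $NS ip link set $TGT_IF up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec $NS ping -c 1 10.0.0.1                           # target -> initiator

modprobe nvme-tcp
ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &   # path relative to an SPDK checkout
pid=$!
for _ in $(seq 1 300); do                                      # roughly 30 s budget
    [ -S /var/tmp/spdk.sock ] && break                         # RPC socket is up, target is listening
    kill -0 $pid 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done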
00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bc51b125056c5d76cdead5841068dac6 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.1ui 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bc51b125056c5d76cdead5841068dac6 0 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bc51b125056c5d76cdead5841068dac6 0 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bc51b125056c5d76cdead5841068dac6 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.1ui 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.1ui 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.1ui 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:20.814 09:58:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9246049c47a1a76f30dec477040b8ff49c7ea7e8bf42479055825224bb8b1776 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.m0l 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9246049c47a1a76f30dec477040b8ff49c7ea7e8bf42479055825224bb8b1776 3 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9246049c47a1a76f30dec477040b8ff49c7ea7e8bf42479055825224bb8b1776 3 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9246049c47a1a76f30dec477040b8ff49c7ea7e8bf42479055825224bb8b1776 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.m0l 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.m0l 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.m0l 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=52ba12701fe7b0e0cdad4430d6c4b6996a129640dda914f5 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kMC 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 52ba12701fe7b0e0cdad4430d6c4b6996a129640dda914f5 0 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 52ba12701fe7b0e0cdad4430d6c4b6996a129640dda914f5 0 
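gen_dhchap_key, traced above for several digest/length combinations, boils down to: pull random bytes with xxd, wrap them into a DHHC-1 secret string with a small inline python helper (whose body is not visible in the xtrace), and stash the result mode 0600 in a temp file. The sketch below reproduces that flow under the assumption that the secret uses the standard NVMe-oF DH-HMAC-CHAP layout, base64 of the key bytes followed by their CRC32, with the digest id (0 = none, 1/2/3 = SHA-256/384/512) in the header, which is what nvme-cli's gen-dhchap-key emits; the real helper in nvmf/common.sh may differ in detail.

len=32                                           # hex chars; 32 -> 16 random bytes (the 'null 32' case above)
digest=0                                         # 0 = no HMAC, matching gen_dhchap_key null
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # same command as the trace
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" > "$file" <<'PYEOF'
import base64, binascii, struct, sys
key = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])
# Assumed layout: DHHC-1:<digest id>:<base64(key || crc32(key), little-endian)>:
blob = key + struct.pack("<I", binascii.crc32(key) & 0xffffffff)
print("DHHC-1:%02x:%s:" % (digest, base64.b64encode(blob).decode()))
PYEOF
chmod 0600 "$file"
echo "$file"                                     # path recorded as keys[0] in the trace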
00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=52ba12701fe7b0e0cdad4430d6c4b6996a129640dda914f5 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:20.814 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kMC 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kMC 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.kMC 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=28f56a99f6e414d4a798d320a9c44be238b956b2d4b81eaa 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.mV2 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 28f56a99f6e414d4a798d320a9c44be238b956b2d4b81eaa 2 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 28f56a99f6e414d4a798d320a9c44be238b956b2d4b81eaa 2 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=28f56a99f6e414d4a798d320a9c44be238b956b2d4b81eaa 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.mV2 00:25:21.073 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.mV2 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.mV2 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.074 09:58:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b9430d6f16a6d360569107cf37500019 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.HVL 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b9430d6f16a6d360569107cf37500019 1 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b9430d6f16a6d360569107cf37500019 1 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b9430d6f16a6d360569107cf37500019 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.HVL 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.HVL 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.HVL 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8624b9fdf40ee22749d48bac1eb23258 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.dTM 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8624b9fdf40ee22749d48bac1eb23258 1 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8624b9fdf40ee22749d48bac1eb23258 1 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=8624b9fdf40ee22749d48bac1eb23258 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.dTM 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.dTM 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.dTM 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=aa808509013ecda4da1ebda590870ddc864167c4d5df0c4e 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.PmC 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key aa808509013ecda4da1ebda590870ddc864167c4d5df0c4e 2 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 aa808509013ecda4da1ebda590870ddc864167c4d5df0c4e 2 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=aa808509013ecda4da1ebda590870ddc864167c4d5df0c4e 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.PmC 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.PmC 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.PmC 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:21.074 09:58:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=47db0c02726c957a6f1f0bbd3be8d98e 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.2zG 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 47db0c02726c957a6f1f0bbd3be8d98e 0 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 47db0c02726c957a6f1f0bbd3be8d98e 0 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=47db0c02726c957a6f1f0bbd3be8d98e 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:21.074 09:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.2zG 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.2zG 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.2zG 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=df113989eb6976509c1f3cd63124ce0ca531e40b82ae5fec896409c1d03e308e 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.eYk 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key df113989eb6976509c1f3cd63124ce0ca531e40b82ae5fec896409c1d03e308e 3 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 df113989eb6976509c1f3cd63124ce0ca531e40b82ae5fec896409c1d03e308e 3 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=df113989eb6976509c1f3cd63124ce0ca531e40b82ae5fec896409c1d03e308e 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.eYk 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.eYk 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.eYk 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3830384 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3830384 ']' 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.333 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1ui 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.m0l ]] 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.m0l 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.kMC 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.mV2 ]] 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.mV2 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.HVL 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.dTM ]] 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dTM 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.PmC 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.2zG ]] 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.2zG 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.eYk 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.592 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.593 09:58:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:21.593 09:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:22.965 Waiting for block devices as requested 00:25:22.965 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:22.965 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:22.965 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:22.965 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:22.965 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:23.224 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:23.224 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:23.224 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:23.224 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:25:23.482 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:23.482 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:23.482 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:23.740 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:23.740 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:23.740 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:23.998 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:23.998 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:24.564 No valid GPT data, bailing 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:24.564 09:59:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:24.564 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:25:24.565 00:25:24.565 Discovery Log Number of Records 2, Generation counter 2 00:25:24.565 =====Discovery Log Entry 0====== 00:25:24.565 trtype: tcp 00:25:24.565 adrfam: ipv4 00:25:24.565 subtype: current discovery subsystem 00:25:24.565 treq: not specified, sq flow control disable supported 00:25:24.565 portid: 1 00:25:24.565 trsvcid: 4420 00:25:24.565 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:24.565 traddr: 10.0.0.1 00:25:24.565 eflags: none 00:25:24.565 sectype: none 00:25:24.565 =====Discovery Log Entry 1====== 00:25:24.565 trtype: tcp 00:25:24.565 adrfam: ipv4 00:25:24.565 subtype: nvme subsystem 00:25:24.565 treq: not specified, sq flow control disable supported 00:25:24.565 portid: 1 00:25:24.565 trsvcid: 4420 00:25:24.565 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:24.565 traddr: 10.0.0.1 00:25:24.565 eflags: none 00:25:24.565 sectype: none 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: ]] 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.565 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.822 nvme0n1 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: ]] 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.822 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.823 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.823 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.823 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.079 nvme0n1 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.079 09:59:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:25.079 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: ]] 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.080 nvme0n1 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.080 09:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: ]] 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.337 nvme0n1 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.337 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: ]] 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.594 nvme0n1 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 
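On the target side, each nvmet_auth_set_key call (host/auth.sh@42-51) selects the key and controller key for the given keyid and echoes 'hmac(<digest>)', the DH group name, and the two DHHC-1 strings. xtrace does not capture where those echoes are redirected; the sketch below assumes they land in the dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key attributes of the nqn.2024-02.io.spdk:host0 entry created under /sys/kernel/config/nvmet/hosts earlier in the log, which is how the kernel nvmet target normally takes DH-HMAC-CHAP material.

# Hedged sketch only: the redirection targets are assumed, not visible in the xtrace.
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 key=$3 ckey=${4:-}
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac($digest)" > "$host/dhchap_hash"       # e.g. hmac(sha256)
    echo "$dhgroup"      > "$host/dhchap_dhgroup"    # e.g. ffdhe2048
    echo "$key"          > "$host/dhchap_key"        # DHHC-1:..: host secret
    if [[ -n $ckey ]]; then                          # keyid 4 has no controller key
        echo "$ckey" > "$host/dhchap_ctrl_key"
    fi
}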
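Each keyid then goes through the same host-side cycle, visible in the repeated connect_authenticate blocks: restrict the bdev_nvme DH-HMAC-CHAP digests and dhgroups, attach a controller to the kernel target at 10.0.0.1:4420 using the keyring names registered earlier (key0..key4, ckey0..ckey3), check that bdev_nvme_get_controllers reports nvme0 (i.e. authentication succeeded in both directions), and detach. A minimal sketch, assuming rpc_cmd resolves to scripts/rpc.py against the default application socket and taking the optional controller-key name as a fourth argument:

connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3              # $4 = controller-key name, if any
    local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" ${4:+--dhchap-ctrlr-key "$4"}
    # The controller only shows up if DH-HMAC-CHAP succeeded.
    [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    "$rpc" bdev_nvme_detach_controller nvme0
}

For the sha256/ffdhe2048 pass above this amounts to connect_authenticate_sketch sha256 ffdhe2048 1 ckey1, and correspondingly for the other key ids and DH groups that follow.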
00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.594 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.851 nvme0n1 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.851 09:59:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: ]] 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:25.851 
09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.851 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.108 nvme0n1 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: ]] 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.108 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:26.109 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.109 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.109 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.109 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.109 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.109 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.109 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.109 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.109 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.109 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.109 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.109 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.109 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.109 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.109 09:59:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:26.109 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.109 09:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.367 nvme0n1 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: ]] 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.367 09:59:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.367 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.624 nvme0n1 00:25:26.624 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: ]] 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.625 09:59:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.625 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.883 nvme0n1 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
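The connect_authenticate sha256 ffdhe3072 4 call that begins here expands, in the trace that follows, to the same host-side sequence every time: configure the allowed digest and DH group on the bdev_nvme layer, resolve the initiator address, and attach with the key matching the key ID just programmed into the target. Condensed into a sketch built only from RPCs visible in the trace (keyid and ckey stand for the script's loop variables):

    # Condensed host-side sequence for one (digest, dhgroup, keyid) combination.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    ip=$(get_main_ns_ip)    # resolves to NVMF_INITIATOR_IP (10.0.0.1) for the tcp transport
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"   # ckey adds --dhchap-ctrlr-key ckeyN only when defined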
00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.883 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.141 nvme0n1 00:25:27.141 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.141 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.141 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.141 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.141 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.141 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.141 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.141 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.141 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.141 09:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: ]] 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.141 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.399 nvme0n1 00:25:27.399 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.399 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.399 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.399 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.399 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.399 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.657 09:59:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: ]] 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:27.657 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.658 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:27.658 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.658 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.658 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.658 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.658 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.658 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.658 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.658 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.658 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.658 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.658 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.658 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.658 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.658 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.658 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:27.658 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.658 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.916 nvme0n1 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: ]] 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
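Key ID 4 in the trace carries no controller key (its ckey entry is empty), and the script handles that without branching: the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line seen at host/auth.sh@58 expands to the extra arguments only when a controller key exists. A small self-contained illustration of that expansion (array contents are placeholders, not real key material):

    # ${ckeys[keyid]:+...} yields the bracketed words only when ckeys[keyid] is set and non-empty.
    ckeys=([0]="DHHC-1:03:placeholder" [4]="")
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
    done
    # prints: keyid=0 extra args: --dhchap-ctrlr-key ckey0
    #         keyid=4 extra args: <none>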
00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.916 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.175 nvme0n1 00:25:28.175 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.175 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.175 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.175 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.175 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.175 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.175 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.175 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.175 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.175 09:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
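Between key IDs the trace repeats the same verify-and-teardown step: the freshly attached controller must show up under the expected name before it is detached and the target is reprogrammed for the next key. As a sketch assembled only from the RPCs and checks visible above:

    # Verification step repeated after every attach in the trace (host/auth.sh@64-65).
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]                      # the "[[ nvme0 == \n\v\m\e\0 ]]" comparisons above
    rpc_cmd bdev_nvme_detach_controller nvme0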
00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: ]] 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:28.175 09:59:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.175 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.433 nvme0n1 00:25:28.433 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.433 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.433 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.433 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.433 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.433 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.433 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.433 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.433 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.433 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.433 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.433 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.434 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.692 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.692 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.692 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.692 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.692 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.692 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.692 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.692 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.692 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.692 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.692 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.692 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.692 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:28.692 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.692 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.950 nvme0n1 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: ]] 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 
]] 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.950 09:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.517 nvme0n1 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: ]] 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.517 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.084 nvme0n1 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.084 09:59:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: ]] 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.084 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.085 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:30.085 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.085 09:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.653 nvme0n1 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:30.653 
09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: ]] 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.653 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.654 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.654 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.654 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.654 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.654 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.654 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.654 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.654 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.654 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.654 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.654 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.654 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.654 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.912 nvme0n1 00:25:30.912 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.912 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.912 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.912 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.912 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.912 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.171 09:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.736 nvme0n1 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:31.736 09:59:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: ]] 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.736 09:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.669 nvme0n1 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: ]] 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.669 09:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.602 nvme0n1 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.603 09:59:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: ]] 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.603 09:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.536 nvme0n1 00:25:34.536 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.536 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.536 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.536 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.536 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.536 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.536 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.536 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.536 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: ]] 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.537 09:59:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.537 09:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.471 nvme0n1 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.471 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.472 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.472 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.472 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.472 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.472 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.472 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.472 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.472 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.472 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.472 09:59:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.472 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.472 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:35.472 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.472 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.038 nvme0n1 00:25:36.038 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.038 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.038 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.038 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.038 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.038 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: ]] 00:25:36.297 
09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.297 09:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.297 nvme0n1 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: ]] 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.297 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.557 nvme0n1 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: ]] 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.557 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.558 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.558 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:36.558 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.558 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.816 nvme0n1 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: ]] 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:36.816 09:59:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.816 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.817 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.817 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.817 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.817 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.817 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.817 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.817 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.817 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.817 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.817 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.817 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.817 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.817 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:36.817 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.817 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.075 nvme0n1 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
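What the trace above and below is exercising, in outline: host/auth.sh walks every digest/dhgroup/key-ID combination, programs the key on the target side, restricts the SPDK host to that digest/dhgroup pair, then attaches and detaches an authenticated controller. Below is a minimal bash sketch of that inner loop, reconstructed only from the commands visible in this trace; helper bodies are simplified, the array contents are the DHHC-1 secrets printed in the log, and it is an illustration of the flow rather than the verbatim script source.

  #!/usr/bin/env bash
  # Sketch of the per-key DHCHAP loop seen in this trace (simplified, not verbatim).
  # Assumes: rpc_cmd issues RPCs to the running SPDK host application, and
  # nvmet_auth_set_key is the helper defined earlier in host/auth.sh that installs
  # the key/ckey for the given digest, dhgroup and key ID on the target side.
  digests=(sha384)                          # the digest exercised in this part of the run
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)  # groups appearing in the surrounding trace
  # keys[i] / ckeys[i] hold the DHHC-1:xx secrets printed in the trace above.

  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # Target side: install the key (and controller key) for this combination.
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

        # Host side: restrict negotiation to this digest/dhgroup pair.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Connect with the matching key; the controller key is only passed when a
        # ckey exists for this ID (key 4 has none, so the expansion drops the flag).
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"

        # Verify the authenticated controller exists, then tear it down for the next pass.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done
  done

Each pass through that loop accounts for one nvmet_auth_set_key / bdev_nvme_set_options / bdev_nvme_attach_controller / bdev_nvme_detach_controller sequence in the log; the attach for key 4 is the one issued without --dhchap-ctrlr-key, because its ckey is empty.
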
00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:37.075 09:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.333 nvme0n1 00:25:37.333 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.333 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.333 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.333 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.333 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.333 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.333 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.333 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.333 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.333 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.333 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.333 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:37.333 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: ]] 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.334 09:59:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.334 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.592 nvme0n1 00:25:37.592 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.592 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.592 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.592 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.592 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.592 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.592 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.592 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.592 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.592 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.592 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.592 09:59:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.592 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:37.592 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.592 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.592 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:37.592 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: ]] 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.593 09:59:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.593 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.851 nvme0n1 00:25:37.851 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.851 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.851 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.851 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.851 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.851 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.851 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.851 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.851 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.851 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.851 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: ]] 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.852 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.111 nvme0n1 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: ]] 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.111 09:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.369 nvme0n1 00:25:38.369 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.369 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.369 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.369 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.369 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:38.370 
09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.370 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.629 nvme0n1 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.629 
09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: ]] 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.629 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.887 nvme0n1 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: ]] 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.887 09:59:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.887 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.145 nvme0n1 00:25:39.145 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.145 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.145 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.145 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.145 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.145 09:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: ]] 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
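
Each nvmet_auth_set_key call traced above (host/auth.sh@42-51) programs one key slot on the target side before the host tries to authenticate with it. xtrace does not print redirections, so the destinations of the four echoes are inferred; the sketch below assumes the standard Linux nvmet configfs host attributes and the keys/ckeys fixture arrays defined earlier in auth.sh:

    # Hedged reconstruction of nvmet_auth_set_key: push digest, DH group and
    # secrets into the kernel nvmet host entry for the initiator NQN used here.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path

        echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. hmac(sha384)
        echo "$dhgroup"      > "$host/dhchap_dhgroup"  # e.g. ffdhe4096
        echo "$key"          > "$host/dhchap_key"      # DHHC-1:..: host secret
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # optional bidirectional secret
    }
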
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.145 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.403 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.403 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.403 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.403 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.661 nvme0n1 00:25:39.661 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.661 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.661 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.661 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: ]] 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
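
The connect_authenticate trace above (host/auth.sh@55-61) is the host-side half of each iteration: it restricts the SPDK initiator to the one digest/dhgroup combination under test and then attaches with the key names for the current slot. Reconstructed from the trace; rpc_cmd is the harness wrapper around scripts/rpc.py, and key0..key4 / ckey0..ckey4 are keyring entries registered earlier in the test (not shown in this excerpt):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Expands to nothing when the slot has no controller key (see keyid 4 later).
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Only the combination under test may be negotiated.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
    }
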
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.662 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.920 nvme0n1 00:25:39.920 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.920 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.920 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.921 09:59:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.921 09:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.180 nvme0n1 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
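
After every successful attach the trace shows the same check-and-teardown tail (host/auth.sh@64-65): the controller list must contain exactly the expected name, then the controller is detached so the next keyid or DH group starts clean. The test inlines these steps; they are wrapped in a helper here only for readability:

    verify_and_detach() {
        local ctrlr
        ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
        [[ $ctrlr == "nvme0" ]]                    # authentication succeeded, controller exists
        rpc_cmd bdev_nvme_detach_controller nvme0  # tear down before the next combination
    }
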
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: ]] 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.180 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.746 nvme0n1 00:25:40.746 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.746 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.746 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.746 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.746 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.746 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.746 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.746 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.746 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.746 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.746 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.746 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.746 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:40.746 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.746 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
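
The host/auth.sh@101-103 markers show the loops driving this stretch of the log: for the sha384 digest, every DH group is exercised against every key slot. The literal arrays below only reflect what is visible in this excerpt (the full script very likely covers more digests and groups), so treat them as illustrative:

    digest=sha384
    dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)   # groups seen in this part of the log
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do          # key slots 0..4 in this run
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (kernel nvmet)
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side (SPDK bdev_nvme RPCs)
        done
    done
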
DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: ]] 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.747 09:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.314 nvme0n1 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.314 09:59:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: ]] 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.314 09:59:18 
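
The recurring xtrace_disable / "[[ 0 == 0 ]]" / "set +x" lines (common/autotest_common.sh@563, @591, @10) are the harness muting bash tracing around rpc_cmd and other chatty helpers, then restoring it. A generic sketch of that save-and-restore pattern, not the verbatim autotest_common.sh implementation:

    xtrace_disable() {
        XTRACE_STATE=$(set +o | grep xtrace)   # records "set -o xtrace" or "set +o xtrace"
        set +x
    }
    xtrace_restore() {
        eval "$XTRACE_STATE"                   # put tracing back the way it was
    }
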
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.314 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.315 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.315 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.315 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.315 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:41.315 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.315 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.881 nvme0n1 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: ]] 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:41.881 09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.881 
09:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.460 nvme0n1 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.460 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:42.461 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.461 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.461 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.461 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.461 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.461 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.461 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.461 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.461 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.461 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.461 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.461 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.461 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.461 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.461 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:42.461 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.461 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.033 nvme0n1 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.033 09:59:19 
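
Key slot 4 above has an empty controller key (ckey=, then "[[ -z '' ]]"), so its attach carries only --dhchap-key key4 and unidirectional authentication is exercised. The array expansion that makes the extra flag optional is the one seen at host/auth.sh@58; a standalone demo of that ${var:+word} idiom:

    ckeys=([0]=secret0 [4]="")                  # slot 4 deliberately has no controller key
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
    done
    # keyid=0 extra args: --dhchap-ctrlr-key ckey0
    # keyid=4 extra args: <none>
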
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: ]] 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:43.033 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.034 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:43.034 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.034 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.034 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.034 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.034 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.034 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.034 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.034 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.034 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.034 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.034 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.034 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.034 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.034 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.034 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:43.034 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.034 09:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.035 nvme0n1 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: ]] 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.035 09:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.969 nvme0n1 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
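
The DHHC-1:xx:...: strings throughout this run are NVMe DH-HMAC-CHAP secrets in their standard textual form: the middle field records the optional secret transform (00 meaning none), and the payload is the base64-encoded key material followed by a CRC-32. The log never shows how these fixtures were generated; one common way to mint a compatible secret is nvme-cli's gen-dhchap-key, shown below as an assumption rather than the method used by this test (re-check the flags against the installed nvme-cli):

    # Hypothetical example, not taken from this run: generate a random 48-byte
    # DH-HMAC-CHAP secret in DHHC-1 textual form.
    nvme gen-dhchap-key --key-length=48 --hmac=0
    # -> DHHC-1:00:<base64 of key material + CRC-32>:
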
xtrace_disable 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: ]] 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.969 
09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.969 09:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.903 nvme0n1 00:25:45.903 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.903 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.903 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.903 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.903 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.903 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.903 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.903 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.903 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.903 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.903 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: ]] 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.904 09:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.866 nvme0n1 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.866 09:59:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.866 09:59:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.866 09:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.432 nvme0n1 00:25:47.432 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.432 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.432 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.432 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.432 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.432 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: ]] 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:47.690 nvme0n1 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:47.690 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.691 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.691 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:47.691 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:47.691 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:47.691 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:47.691 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.691 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:47.691 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:47.691 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: ]] 00:25:47.691 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:47.691 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:47.691 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.691 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.691 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:47.691 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:47.691 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:47.691 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:47.691 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.691 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.950 nvme0n1 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:47.950 
09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: ]] 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.950 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.209 nvme0n1 00:25:48.209 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.209 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.209 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.209 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.209 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.209 09:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: ]] 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.209 
09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.209 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.468 nvme0n1 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.468 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.727 nvme0n1 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: ]] 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.727 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.986 nvme0n1 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.986 
09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: ]] 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:48.986 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:48.987 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.987 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:48.987 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.987 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.987 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.987 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.987 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.987 09:59:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.987 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.987 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.987 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.987 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.987 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.987 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.987 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.987 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.987 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:48.987 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.987 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.246 nvme0n1 00:25:49.246 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.246 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.246 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.246 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.246 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.246 09:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:49.246 09:59:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: ]] 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.246 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.504 nvme0n1 00:25:49.504 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.504 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.504 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.504 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.504 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.504 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.504 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.504 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.504 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.504 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.504 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.504 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.504 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: ]] 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.505 09:59:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.505 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.763 nvme0n1 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:49.763 
09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.763 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.764 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
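[editorial sketch] The trace above is one pass of the auth test's per-key cycle for the ffdhe3072 group: a DH-HMAC-CHAP key is installed on the target side, bdev_nvme_set_options restricts the host to a single digest/dhgroup pair, the controller is attached with the matching --dhchap-key (and --dhchap-ctrlr-key when a controller key exists), then verified via bdev_nvme_get_controllers and detached before the next keyid. A minimal sketch of that cycle, under the assumption that rpc_cmd, nvmet_auth_set_key and get_main_ns_ip are the suite's own helpers as seen in the trace, is:

    # one connect_authenticate iteration, simplified from the trace above
    digest=sha512
    dhgroup=ffdhe3072
    keyid=2
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"          # install keyN/ckeyN on the target
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    ip=$(get_main_ns_ip)                                      # resolves NVMF_INITIATOR_IP (10.0.0.1 here)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # attach succeeded, auth passed
    rpc_cmd bdev_nvme_detach_controller nvme0                 # tear down before the next keyid

The same cycle repeats below for keyids 0-4 across the ffdhe4096, ffdhe6144 and ffdhe8192 groups.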
00:25:50.022 nvme0n1 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: ]] 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:50.022 09:59:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.022 09:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.282 nvme0n1 00:25:50.282 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.283 09:59:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: ]] 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.283 09:59:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.283 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.541 nvme0n1 00:25:50.541 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.541 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.541 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.541 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.541 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.541 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.541 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.541 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.541 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.541 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.799 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.799 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.799 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:50.799 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.799 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.799 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:50.799 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:50.799 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:50.799 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:50.799 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.799 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:50.799 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:50.799 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: ]] 00:25:50.799 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:50.799 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:50.799 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.799 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.799 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.800 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.058 nvme0n1 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: ]] 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.058 09:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.316 nvme0n1 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.316 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.575 nvme0n1 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: ]] 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.575 09:59:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.575 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.141 nvme0n1 00:25:52.141 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.141 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.141 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.141 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.141 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.141 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.141 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.141 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.141 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.141 09:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: ]] 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.141 09:59:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.141 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.711 nvme0n1 00:25:52.711 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.711 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.711 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.711 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.711 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.711 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.711 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.711 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.711 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.711 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.711 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.711 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.711 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:52.711 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.711 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.711 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:52.711 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:52.711 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: ]] 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.712 09:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.277 nvme0n1 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: ]] 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.277 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.535 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.535 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.535 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.535 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.535 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.535 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.535 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.535 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:53.535 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.535 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.793 nvme0n1 00:25:53.793 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.793 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.793 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.793 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.793 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.793 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.051 09:59:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.051 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.052 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.052 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.052 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.052 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.052 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.052 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.052 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.052 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.052 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.052 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.052 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.052 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:54.052 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.052 09:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.617 nvme0n1 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmM1MWIxMjUwNTZjNWQ3NmNkZWFkNTg0MTA2OGRhYzZGWNvi: 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: ]] 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTI0NjA0OWM0N2ExYTc2ZjMwZGVjNDc3MDQwYjhmZjQ5YzdlYTdlOGJmNDI0NzkwNTU4MjUyMjRiYjhiMTc3NgkRMcg=: 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.617 09:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.551 nvme0n1 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:55.551 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: ]] 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.552 09:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.486 nvme0n1 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.486 09:59:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: ]] 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.486 09:59:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.486 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.487 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:56.487 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.487 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.052 nvme0n1 00:25:57.052 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.052 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.052 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.052 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.052 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.311 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.311 09:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE4MDg1MDkwMTNlY2RhNGRhMWViZGE1OTA4NzBkZGM4NjQxNjdjNGQ1ZGYwYzRlcj2XEA==: 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: ]] 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDdkYjBjMDI3MjZjOTU3YTZmMWYwYmJkM2JlOGQ5OGUHs3C+: 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:57.311 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.311 
09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.246 nvme0n1 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGYxMTM5ODllYjY5NzY1MDljMWYzY2Q2MzEyNGNlMGNhNTMxZTQwYjgyYWU1ZmVjODk2NDA5YzFkMDNlMzA4ZYGcxZM=: 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.246 09:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.180 nvme0n1 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: ]] 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.180 request: 00:25:59.180 { 00:25:59.180 "name": "nvme0", 00:25:59.180 "trtype": "tcp", 00:25:59.180 "traddr": "10.0.0.1", 00:25:59.180 "adrfam": "ipv4", 00:25:59.180 "trsvcid": "4420", 00:25:59.180 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:59.180 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:59.180 "prchk_reftag": false, 00:25:59.180 "prchk_guard": false, 00:25:59.180 "hdgst": false, 00:25:59.180 "ddgst": false, 00:25:59.180 "allow_unrecognized_csi": false, 00:25:59.180 "method": "bdev_nvme_attach_controller", 00:25:59.180 "req_id": 1 00:25:59.180 } 00:25:59.180 Got JSON-RPC error response 00:25:59.180 response: 00:25:59.180 { 00:25:59.180 "code": -5, 00:25:59.180 "message": "Input/output error" 00:25:59.180 } 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
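
The JSON-RPC exchange above is the expected-failure half of the DH-HMAC-CHAP host tests: after host/auth.sh@110 re-keys the target for sha256/ffdhe2048 with key1 and host/auth.sh@111 restricts the host to the same digest and dhgroup, host/auth.sh@112 wraps bdev_nvme_attach_controller in the NOT helper, so an attach that presents no --dhchap-key must come back with error -5 (Input/output error) for the test to stay green. A minimal sketch of that pattern, assuming SPDK's scripts/rpc.py as the RPC client and an illustrative expect_failure helper in place of autotest_common.sh's NOT():

  rpc=./scripts/rpc.py                      # assumed path to SPDK's RPC client

  expect_failure() {                        # stand-in for the NOT() helper used in the log
      if "$@"; then
          echo "unexpected success: $*" >&2
          return 1                          # the call was supposed to be rejected
      fi
      return 0                              # failure is the outcome the test wants
  }

  # Target now requires DH-HMAC-CHAP (sha256/ffdhe2048, key1), so attaching
  # without --dhchap-key is rejected with -5 Input/output error, as logged.
  expect_failure "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
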
00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.180 09:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.180 request: 00:25:59.180 { 00:25:59.180 "name": "nvme0", 00:25:59.181 "trtype": "tcp", 00:25:59.181 "traddr": "10.0.0.1", 00:25:59.181 "adrfam": "ipv4", 00:25:59.181 "trsvcid": "4420", 00:25:59.181 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:59.181 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:59.181 "prchk_reftag": false, 00:25:59.181 "prchk_guard": false, 00:25:59.181 "hdgst": false, 00:25:59.181 "ddgst": false, 00:25:59.181 "dhchap_key": "key2", 00:25:59.181 "allow_unrecognized_csi": false, 00:25:59.181 "method": "bdev_nvme_attach_controller", 00:25:59.181 "req_id": 1 00:25:59.181 } 00:25:59.181 Got JSON-RPC error response 00:25:59.181 response: 00:25:59.181 { 00:25:59.181 "code": -5, 00:25:59.181 "message": "Input/output error" 00:25:59.181 } 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
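
host/auth.sh@117 repeats the attach with --dhchap-key key2 and it is refused the same way, since the target subsystem was keyed with key1; only the later host/auth.sh@128 call, which presents the matching key1/ckey1 pair, is expected to succeed and produce the nvme0 controller that the following bdev_nvme_get_controllers checks verify by name. A sketch of that matching attach under the same assumptions (rpc.py location assumed; key names and NQNs as they appear in the logged rpc_cmd calls):

  # Positive path mirroring host/auth.sh@128 in the log
  # (scripts/rpc.py path is an assumption; keys and NQNs are the logged ones).
  rpc=./scripts/rpc.py

  "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
      --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1

  # The controller should now be visible by name before the key-rotation
  # (bdev_nvme_set_keys) checks that follow in the log.
  "$rpc" bdev_nvme_get_controllers | jq -r '.[].name'    # expected: nvme0
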
00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.181 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.439 request: 00:25:59.439 { 00:25:59.439 "name": "nvme0", 00:25:59.439 "trtype": "tcp", 00:25:59.439 "traddr": "10.0.0.1", 00:25:59.439 "adrfam": "ipv4", 00:25:59.439 "trsvcid": "4420", 00:25:59.439 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:59.439 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:59.439 "prchk_reftag": false, 00:25:59.439 "prchk_guard": false, 00:25:59.439 "hdgst": false, 00:25:59.439 "ddgst": false, 00:25:59.439 "dhchap_key": "key1", 00:25:59.439 "dhchap_ctrlr_key": "ckey2", 00:25:59.439 "allow_unrecognized_csi": false, 00:25:59.439 "method": "bdev_nvme_attach_controller", 00:25:59.439 "req_id": 1 00:25:59.439 } 00:25:59.439 Got JSON-RPC error response 00:25:59.439 response: 00:25:59.439 { 00:25:59.439 "code": -5, 00:25:59.439 "message": "Input/output 
error" 00:25:59.439 } 00:25:59.439 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:59.439 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:59.439 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:59.439 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:59.439 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:59.439 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:59.439 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.440 nvme0n1 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: ]] 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.440 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.698 request: 00:25:59.698 { 00:25:59.698 "name": "nvme0", 00:25:59.698 "dhchap_key": "key1", 00:25:59.698 "dhchap_ctrlr_key": "ckey2", 00:25:59.698 "method": "bdev_nvme_set_keys", 00:25:59.698 "req_id": 1 00:25:59.698 } 00:25:59.698 Got JSON-RPC error response 00:25:59.698 response: 00:25:59.698 { 00:25:59.698 "code": -13, 00:25:59.698 "message": "Permission denied" 00:25:59.698 } 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:59.698 09:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:00.633 09:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.633 09:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:00.633 09:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.633 09:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.633 09:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.891 09:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:00.891 09:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiYTEyNzAxZmU3YjBlMGNkYWQ0NDMwZDZjNGI2OTk2YTEyOTY0MGRkYTkxNGY1O6q9JQ==: 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: ]] 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MjhmNTZhOTlmNmU0MTRkNGE3OThkMzIwYTljNDRiZTIzOGI5NTZiMmQ0YjgxZWFhAoXigA==: 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.828 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.087 nvme0n1 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjk0MzBkNmYxNmE2ZDM2MDU2OTEwN2NmMzc1MDAwMTmtvloF: 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: ]] 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYyNGI5ZmRmNDBlZTIyNzQ5ZDQ4YmFjMWViMjMyNTjhbu0t: 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.087 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.087 request: 00:26:02.088 { 00:26:02.088 "name": "nvme0", 00:26:02.088 "dhchap_key": "key2", 00:26:02.088 "dhchap_ctrlr_key": "ckey1", 00:26:02.088 "method": "bdev_nvme_set_keys", 00:26:02.088 "req_id": 1 00:26:02.088 } 00:26:02.088 Got JSON-RPC error response 00:26:02.088 response: 00:26:02.088 { 00:26:02.088 "code": -13, 00:26:02.088 "message": "Permission denied" 00:26:02.088 } 00:26:02.088 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:02.088 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:02.088 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:02.088 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:02.088 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:02.088 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.088 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:02.088 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.088 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.088 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.088 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:02.088 09:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:03.022 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.022 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:03.022 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.022 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.022 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.022 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:03.022 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:03.022 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:03.022 09:59:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:03.022 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:03.022 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:03.022 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:03.022 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:03.022 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:03.022 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:03.022 rmmod nvme_tcp 00:26:03.022 rmmod nvme_fabrics 00:26:03.280 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:03.280 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:03.280 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:03.280 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3830384 ']' 00:26:03.280 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3830384 00:26:03.280 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3830384 ']' 00:26:03.280 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3830384 00:26:03.280 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:26:03.280 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:03.280 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3830384 00:26:03.280 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:03.280 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:03.280 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3830384' 00:26:03.280 killing process with pid 3830384 00:26:03.280 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3830384 00:26:03.280 09:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3830384 00:26:03.538 09:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:03.538 09:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:03.538 09:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:03.538 09:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:26:03.538 09:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:03.538 09:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:03.538 09:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:03.538 09:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:03.538 09:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:03.538 09:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.538 09:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:26:03.538 09:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.444 09:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:05.444 09:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:05.444 09:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:05.444 09:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:05.444 09:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:05.444 09:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:05.444 09:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:05.444 09:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:05.444 09:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:05.444 09:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:05.444 09:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:05.444 09:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:05.444 09:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:06.820 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:06.820 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:06.820 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:06.820 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:06.820 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:06.820 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:06.820 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:06.820 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:06.820 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:06.820 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:06.820 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:06.820 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:06.820 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:06.820 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:06.820 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:06.820 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:07.754 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:26:08.014 09:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.1ui /tmp/spdk.key-null.kMC /tmp/spdk.key-sha256.HVL /tmp/spdk.key-sha384.PmC /tmp/spdk.key-sha512.eYk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:08.014 09:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:08.952 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:08.952 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:08.952 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 
00:26:08.952 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:08.952 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:08.952 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:08.952 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:08.952 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:08.952 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:08.952 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:09.210 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:09.210 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:09.210 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:09.210 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:09.210 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:09.210 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:09.210 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:09.210 00:26:09.210 real 0m51.243s 00:26:09.210 user 0m48.795s 00:26:09.210 sys 0m6.212s 00:26:09.210 09:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:09.210 09:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.210 ************************************ 00:26:09.210 END TEST nvmf_auth_host 00:26:09.210 ************************************ 00:26:09.210 09:59:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:09.210 09:59:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:09.210 09:59:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:09.210 09:59:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:09.210 09:59:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.210 ************************************ 00:26:09.210 START TEST nvmf_digest 00:26:09.210 ************************************ 00:26:09.210 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:09.472 * Looking for test storage... 
00:26:09.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:09.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.472 --rc genhtml_branch_coverage=1 00:26:09.472 --rc genhtml_function_coverage=1 00:26:09.472 --rc genhtml_legend=1 00:26:09.472 --rc geninfo_all_blocks=1 00:26:09.472 --rc geninfo_unexecuted_blocks=1 00:26:09.472 00:26:09.472 ' 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:09.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.472 --rc genhtml_branch_coverage=1 00:26:09.472 --rc genhtml_function_coverage=1 00:26:09.472 --rc genhtml_legend=1 00:26:09.472 --rc geninfo_all_blocks=1 00:26:09.472 --rc geninfo_unexecuted_blocks=1 00:26:09.472 00:26:09.472 ' 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:09.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.472 --rc genhtml_branch_coverage=1 00:26:09.472 --rc genhtml_function_coverage=1 00:26:09.472 --rc genhtml_legend=1 00:26:09.472 --rc geninfo_all_blocks=1 00:26:09.472 --rc geninfo_unexecuted_blocks=1 00:26:09.472 00:26:09.472 ' 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:09.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.472 --rc genhtml_branch_coverage=1 00:26:09.472 --rc genhtml_function_coverage=1 00:26:09.472 --rc genhtml_legend=1 00:26:09.472 --rc geninfo_all_blocks=1 00:26:09.472 --rc geninfo_unexecuted_blocks=1 00:26:09.472 00:26:09.472 ' 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.472 
09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:09.472 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:09.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:09.473 09:59:46 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:09.473 09:59:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:11.483 
09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:11.483 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:11.483 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:11.484 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:11.484 Found net devices under 0000:09:00.0: cvl_0_0 
00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:11.484 Found net devices under 0000:09:00.1: cvl_0_1 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:11.484 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:11.743 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:11.743 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:26:11.743 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:11.743 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:11.743 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:11.743 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:11.743 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:11.743 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:11.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:11.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:26:11.743 00:26:11.743 --- 10.0.0.2 ping statistics --- 00:26:11.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.743 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:26:11.743 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:11.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:11.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:26:11.743 00:26:11.743 --- 10.0.0.1 ping statistics --- 00:26:11.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.743 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:26:11.743 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:11.744 ************************************ 00:26:11.744 START TEST nvmf_digest_clean 00:26:11.744 ************************************ 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3840014 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3840014 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3840014 ']' 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:11.744 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.744 [2024-11-20 09:59:48.583717] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:26:11.744 [2024-11-20 09:59:48.583804] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.002 [2024-11-20 09:59:48.655789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.002 [2024-11-20 09:59:48.709824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.002 [2024-11-20 09:59:48.709896] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.002 [2024-11-20 09:59:48.709909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.002 [2024-11-20 09:59:48.709920] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.002 [2024-11-20 09:59:48.709945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
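For readers following the trace, the network plumbing that the ping checks above validate, and the target launch that follows, reduce to roughly this sequence (a condensed sketch of the commands visible in the trace, to be run as root; cvl_0_0 is the target-side port and cvl_0_1 the initiator-side port of the E810 NIC pair found earlier):

    # Sketch reconstructed from the trace: isolate the target-side port in its own
    # namespace, address both sides on 10.0.0.0/24, open TCP/4420, verify
    # reachability, then start the SPDK target inside the namespace, paused for RPC.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc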
00:26:12.002 [2024-11-20 09:59:48.710502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.002 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:12.002 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:12.002 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:12.002 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:12.002 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:12.002 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:12.002 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:12.002 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:12.002 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:12.002 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.002 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:12.261 null0 00:26:12.261 [2024-11-20 09:59:48.943202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:12.261 [2024-11-20 09:59:48.967464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:12.261 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.261 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:12.261 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:12.261 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:12.261 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:12.261 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:12.261 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:12.261 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:12.261 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3840038 00:26:12.261 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:12.261 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3840038 /var/tmp/bperf.sock 00:26:12.261 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3840038 ']' 00:26:12.261 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:12.261 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:26:12.261 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:12.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:12.261 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:12.261 09:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:12.261 [2024-11-20 09:59:49.016117] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:26:12.261 [2024-11-20 09:59:49.016194] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840038 ] 00:26:12.261 [2024-11-20 09:59:49.080680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.261 [2024-11-20 09:59:49.137561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.517 09:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:12.517 09:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:12.517 09:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:12.517 09:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:12.517 09:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:13.082 09:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:13.082 09:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:13.340 nvme0n1 00:26:13.598 09:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:13.598 09:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:13.598 Running I/O for 2 seconds... 
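For reference, the client side of this first pass (4 KiB random reads, queue depth 128, data digest enabled) is driven as sketched below; this is a minimal reconstruction of the commands in the trace, using paths relative to the spdk checkout, with the results following in the log:

    # Sketch: drive bdevperf against the target in the namespace, with data digest.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start bdevperf idle: --wait-for-rpc defers framework init, -z keeps it
    # waiting for the perform_tests RPC instead of running immediately.
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # Finish initialization over bdevperf's private RPC socket.
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # Attach the NVMe-oF/TCP controller with data digest (--ddgst) enabled.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Kick off the timed run and collect results.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests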
00:26:15.904 19102.00 IOPS, 74.62 MiB/s [2024-11-20T08:59:52.818Z] 19233.00 IOPS, 75.13 MiB/s 00:26:15.904 Latency(us) 00:26:15.904 [2024-11-20T08:59:52.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.904 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:15.904 nvme0n1 : 2.05 18879.53 73.75 0.00 0.00 6639.74 2997.67 43496.49 00:26:15.904 [2024-11-20T08:59:52.818Z] =================================================================================================================== 00:26:15.904 [2024-11-20T08:59:52.818Z] Total : 18879.53 73.75 0.00 0.00 6639.74 2997.67 43496.49 00:26:15.904 { 00:26:15.904 "results": [ 00:26:15.904 { 00:26:15.904 "job": "nvme0n1", 00:26:15.904 "core_mask": "0x2", 00:26:15.904 "workload": "randread", 00:26:15.904 "status": "finished", 00:26:15.904 "queue_depth": 128, 00:26:15.904 "io_size": 4096, 00:26:15.904 "runtime": 2.045337, 00:26:15.904 "iops": 18879.52938806661, 00:26:15.904 "mibps": 73.7481616721352, 00:26:15.904 "io_failed": 0, 00:26:15.904 "io_timeout": 0, 00:26:15.904 "avg_latency_us": 6639.744917701335, 00:26:15.904 "min_latency_us": 2997.6651851851852, 00:26:15.904 "max_latency_us": 43496.485925925925 00:26:15.904 } 00:26:15.904 ], 00:26:15.904 "core_count": 1 00:26:15.904 } 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:15.904 | select(.opcode=="crc32c") 00:26:15.904 | "\(.module_name) \(.executed)"' 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3840038 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3840038 ']' 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3840038 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3840038 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3840038' 00:26:15.904 killing process with pid 3840038 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3840038 00:26:15.904 Received shutdown signal, test time was about 2.000000 seconds 00:26:15.904 00:26:15.904 Latency(us) 00:26:15.904 [2024-11-20T08:59:52.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.904 [2024-11-20T08:59:52.818Z] =================================================================================================================== 00:26:15.904 [2024-11-20T08:59:52.818Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:15.904 09:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3840038 00:26:16.162 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:16.162 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:16.162 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:16.162 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:16.162 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:16.162 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:16.162 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:16.162 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3840566 00:26:16.162 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:16.162 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3840566 /var/tmp/bperf.sock 00:26:16.162 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3840566 ']' 00:26:16.162 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:16.162 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:16.162 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:16.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:16.162 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:16.162 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:16.163 [2024-11-20 09:59:53.063248] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
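As a quick cross-check of the first run's summary, the reported bandwidth is simply IOPS times the 4096-byte I/O size; for example (an illustrative one-liner, not part of the harness):

    # Sanity check of the run-1 summary: bandwidth = IOPS * io_size, in MiB/s.
    echo 'scale=4; 18879.53 * 4096 / 1048576' | bc
    # -> 73.7481, matching the ~73.75 MiB/s reported above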
00:26:16.163 [2024-11-20 09:59:53.063352] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840566 ] 00:26:16.163 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:16.163 Zero copy mechanism will not be used. 00:26:16.421 [2024-11-20 09:59:53.128567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.421 [2024-11-20 09:59:53.184037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.421 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:16.421 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:16.421 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:16.421 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:16.421 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:16.988 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:16.988 09:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.245 nvme0n1 00:26:17.245 09:59:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:17.245 09:59:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:17.503 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:17.503 Zero copy mechanism will not be used. 00:26:17.503 Running I/O for 2 seconds... 
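After each of these two-second runs the harness confirms that the crc32c digest work was actually executed, and by the expected module (software here, since DSA is disabled), by querying bdevperf's accel statistics, as it did above and will do again below; roughly:

    # Sketch: read crc32c accel statistics from the bdevperf instance and check
    # that the executed count is non-zero and came from the expected module.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # expected output: "software <count>" with <count> greater than zero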
00:26:19.372 4726.00 IOPS, 590.75 MiB/s [2024-11-20T08:59:56.286Z] 4622.50 IOPS, 577.81 MiB/s 00:26:19.372 Latency(us) 00:26:19.372 [2024-11-20T08:59:56.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:19.372 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:19.372 nvme0n1 : 2.00 4626.88 578.36 0.00 0.00 3453.41 667.50 5291.43 00:26:19.372 [2024-11-20T08:59:56.286Z] =================================================================================================================== 00:26:19.372 [2024-11-20T08:59:56.286Z] Total : 4626.88 578.36 0.00 0.00 3453.41 667.50 5291.43 00:26:19.372 { 00:26:19.372 "results": [ 00:26:19.372 { 00:26:19.372 "job": "nvme0n1", 00:26:19.372 "core_mask": "0x2", 00:26:19.372 "workload": "randread", 00:26:19.372 "status": "finished", 00:26:19.372 "queue_depth": 16, 00:26:19.372 "io_size": 131072, 00:26:19.372 "runtime": 2.004375, 00:26:19.372 "iops": 4626.878702837543, 00:26:19.372 "mibps": 578.3598378546928, 00:26:19.372 "io_failed": 0, 00:26:19.372 "io_timeout": 0, 00:26:19.372 "avg_latency_us": 3453.414420242973, 00:26:19.372 "min_latency_us": 667.4962962962964, 00:26:19.372 "max_latency_us": 5291.425185185185 00:26:19.372 } 00:26:19.372 ], 00:26:19.372 "core_count": 1 00:26:19.372 } 00:26:19.372 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:19.630 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:19.630 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:19.630 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:19.630 | select(.opcode=="crc32c") 00:26:19.630 | "\(.module_name) \(.executed)"' 00:26:19.630 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:19.888 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:19.888 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:19.888 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:19.888 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:19.888 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3840566 00:26:19.888 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3840566 ']' 00:26:19.888 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3840566 00:26:19.888 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:19.888 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:19.888 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3840566 00:26:19.888 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:19.888 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:19.888 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3840566' 00:26:19.888 killing process with pid 3840566 00:26:19.888 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3840566 00:26:19.888 Received shutdown signal, test time was about 2.000000 seconds 00:26:19.888 00:26:19.888 Latency(us) 00:26:19.888 [2024-11-20T08:59:56.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:19.888 [2024-11-20T08:59:56.802Z] =================================================================================================================== 00:26:19.888 [2024-11-20T08:59:56.802Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:19.888 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3840566 00:26:20.146 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:20.146 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:20.146 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:20.146 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:20.146 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:20.146 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:20.146 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:20.146 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3840978 00:26:20.146 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:20.146 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3840978 /var/tmp/bperf.sock 00:26:20.146 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3840978 ']' 00:26:20.146 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:20.146 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.146 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:20.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:20.146 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.146 09:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:20.146 [2024-11-20 09:59:56.874775] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:26:20.146 [2024-11-20 09:59:56.874853] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840978 ] 00:26:20.146 [2024-11-20 09:59:56.943371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.146 [2024-11-20 09:59:57.004049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.404 09:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.404 09:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:20.404 09:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:20.404 09:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:20.404 09:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:20.661 09:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:20.661 09:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.226 nvme0n1 00:26:21.226 09:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:21.226 09:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:21.226 Running I/O for 2 seconds... 
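The pass/fail signal for each of these clean runs is not the IOPS figure but the accel statistics gathered afterwards: digest.sh reads accel_get_stats over the same bperf socket and checks that crc32c was executed a non-zero number of times by the expected module (software here, since DSA scanning is disabled). A sketch of that check, reusing the jq filter that appears verbatim in the trace:

  # report crc32c accel usage as "<module> <executed>" and verify the software path ran
  ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
    | while read -r acc_module acc_executed; do
        [ "$acc_module" = software ] && [ "$acc_executed" -gt 0 ] \
          && echo "crc32c handled by $acc_module ($acc_executed operations)"
      done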
00:26:23.530 21897.00 IOPS, 85.54 MiB/s [2024-11-20T09:00:00.444Z] 21238.00 IOPS, 82.96 MiB/s 00:26:23.530 Latency(us) 00:26:23.530 [2024-11-20T09:00:00.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.530 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:23.530 nvme0n1 : 2.01 21229.06 82.93 0.00 0.00 6015.87 2633.58 9029.40 00:26:23.530 [2024-11-20T09:00:00.444Z] =================================================================================================================== 00:26:23.530 [2024-11-20T09:00:00.444Z] Total : 21229.06 82.93 0.00 0.00 6015.87 2633.58 9029.40 00:26:23.530 { 00:26:23.530 "results": [ 00:26:23.530 { 00:26:23.531 "job": "nvme0n1", 00:26:23.531 "core_mask": "0x2", 00:26:23.531 "workload": "randwrite", 00:26:23.531 "status": "finished", 00:26:23.531 "queue_depth": 128, 00:26:23.531 "io_size": 4096, 00:26:23.531 "runtime": 2.008379, 00:26:23.531 "iops": 21229.060849570724, 00:26:23.531 "mibps": 82.92601894363564, 00:26:23.531 "io_failed": 0, 00:26:23.531 "io_timeout": 0, 00:26:23.531 "avg_latency_us": 6015.86651739271, 00:26:23.531 "min_latency_us": 2633.5762962962963, 00:26:23.531 "max_latency_us": 9029.404444444444 00:26:23.531 } 00:26:23.531 ], 00:26:23.531 "core_count": 1 00:26:23.531 } 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:23.531 | select(.opcode=="crc32c") 00:26:23.531 | "\(.module_name) \(.executed)"' 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3840978 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3840978 ']' 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3840978 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3840978 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3840978' 00:26:23.531 killing process with pid 3840978 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3840978 00:26:23.531 Received shutdown signal, test time was about 2.000000 seconds 00:26:23.531 00:26:23.531 Latency(us) 00:26:23.531 [2024-11-20T09:00:00.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.531 [2024-11-20T09:00:00.445Z] =================================================================================================================== 00:26:23.531 [2024-11-20T09:00:00.445Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:23.531 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3840978 00:26:23.788 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:23.788 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:23.789 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:23.789 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:23.789 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:23.789 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:23.789 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:23.789 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3841495 00:26:23.789 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:23.789 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3841495 /var/tmp/bperf.sock 00:26:23.789 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3841495 ']' 00:26:23.789 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:23.789 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:23.789 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:23.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:23.789 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:23.789 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:23.789 [2024-11-20 10:00:00.677414] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:26:23.789 [2024-11-20 10:00:00.677504] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3841495 ] 00:26:23.789 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:23.789 Zero copy mechanism will not be used. 00:26:24.046 [2024-11-20 10:00:00.746974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.046 [2024-11-20 10:00:00.807732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.046 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.046 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:24.046 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:24.046 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:24.046 10:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:24.621 10:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.621 10:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.879 nvme0n1 00:26:24.879 10:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:24.879 10:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:25.136 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:25.136 Zero copy mechanism will not be used. 00:26:25.136 Running I/O for 2 seconds... 
00:26:26.999 5687.00 IOPS, 710.88 MiB/s [2024-11-20T09:00:03.913Z] 5656.00 IOPS, 707.00 MiB/s 00:26:26.999 Latency(us) 00:26:26.999 [2024-11-20T09:00:03.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.999 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:26.999 nvme0n1 : 2.00 5653.59 706.70 0.00 0.00 2823.42 2135.99 4757.43 00:26:26.999 [2024-11-20T09:00:03.913Z] =================================================================================================================== 00:26:26.999 [2024-11-20T09:00:03.913Z] Total : 5653.59 706.70 0.00 0.00 2823.42 2135.99 4757.43 00:26:26.999 { 00:26:26.999 "results": [ 00:26:26.999 { 00:26:26.999 "job": "nvme0n1", 00:26:26.999 "core_mask": "0x2", 00:26:26.999 "workload": "randwrite", 00:26:26.999 "status": "finished", 00:26:26.999 "queue_depth": 16, 00:26:26.999 "io_size": 131072, 00:26:26.999 "runtime": 2.003681, 00:26:26.999 "iops": 5653.594559213767, 00:26:26.999 "mibps": 706.6993199017209, 00:26:26.999 "io_failed": 0, 00:26:26.999 "io_timeout": 0, 00:26:26.999 "avg_latency_us": 2823.4180539861895, 00:26:26.999 "min_latency_us": 2135.988148148148, 00:26:26.999 "max_latency_us": 4757.4281481481485 00:26:26.999 } 00:26:26.999 ], 00:26:26.999 "core_count": 1 00:26:26.999 } 00:26:26.999 10:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:26.999 10:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:26.999 10:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:26.999 10:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:26.999 | select(.opcode=="crc32c") 00:26:26.999 | "\(.module_name) \(.executed)"' 00:26:26.999 10:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3841495 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3841495 ']' 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3841495 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3841495 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3841495' 00:26:27.564 killing process with pid 3841495 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3841495 00:26:27.564 Received shutdown signal, test time was about 2.000000 seconds 00:26:27.564 00:26:27.564 Latency(us) 00:26:27.564 [2024-11-20T09:00:04.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.564 [2024-11-20T09:00:04.478Z] =================================================================================================================== 00:26:27.564 [2024-11-20T09:00:04.478Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3841495 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3840014 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3840014 ']' 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3840014 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:27.564 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3840014 00:26:27.823 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:27.823 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:27.823 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3840014' 00:26:27.823 killing process with pid 3840014 00:26:27.823 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3840014 00:26:27.823 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3840014 00:26:27.823 00:26:27.823 real 0m16.176s 00:26:27.823 user 0m32.228s 00:26:27.823 sys 0m4.315s 00:26:27.823 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:27.823 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:27.823 ************************************ 00:26:27.823 END TEST nvmf_digest_clean 00:26:27.823 ************************************ 00:26:27.823 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:27.823 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:27.823 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:27.823 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:28.081 ************************************ 00:26:28.081 START TEST nvmf_digest_error 00:26:28.081 ************************************ 00:26:28.081 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:26:28.081 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:28.081 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:28.081 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:28.081 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.081 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3842060 00:26:28.081 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:28.081 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3842060 00:26:28.081 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3842060 ']' 00:26:28.081 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.081 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:28.081 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.081 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.081 10:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.081 [2024-11-20 10:00:04.810875] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:26:28.081 [2024-11-20 10:00:04.810947] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.081 [2024-11-20 10:00:04.879998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.081 [2024-11-20 10:00:04.934027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.081 [2024-11-20 10:00:04.934083] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.081 [2024-11-20 10:00:04.934110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.081 [2024-11-20 10:00:04.934121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.081 [2024-11-20 10:00:04.934131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
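The nvmf_digest_error suite that starts here reconfigures the target side first: nvmf_tgt is launched with --wait-for-rpc so that crc32c can be routed to the 'error' accel module before the framework initializes, and only then is the usual null-bdev/NVMe-oF configuration applied and the 10.0.0.2:4420 listener brought up. The trace collapses that configuration into a single rpc_cmd, so the per-RPC breakdown below is an illustrative reconstruction (the RPC names are standard SPDK ones, but the sizes and flags are assumptions), not the script's literal here-doc:

  # start the target with framework init deferred (the CI job runs this inside the cvl_0_0_ns_spdk netns)
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

  # route crc32c to the error-injection accel module, then let initialization finish
  ./scripts/rpc.py accel_assign_opc -o crc32c -m error
  ./scripts/rpc.py framework_start_init

  # roughly what common_target_config sets up: a null bdev exported as nqn.2016-06.io.spdk:cnode1 over TCP
  ./scripts/rpc.py bdev_null_create null0 100 4096
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420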
00:26:28.081 [2024-11-20 10:00:04.934730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.340 [2024-11-20 10:00:05.063435] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.340 null0 00:26:28.340 [2024-11-20 10:00:05.180669] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.340 [2024-11-20 10:00:05.204886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3842197 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3842197 /var/tmp/bperf.sock 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3842197 ']' 
00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:28.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.340 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.340 [2024-11-20 10:00:05.251810] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:26:28.340 [2024-11-20 10:00:05.251874] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3842197 ] 00:26:28.598 [2024-11-20 10:00:05.317619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.598 [2024-11-20 10:00:05.375578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.598 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.598 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:28.598 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:28.598 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:28.856 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:28.856 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.857 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.115 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.115 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.115 10:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.374 nvme0n1 00:26:29.374 10:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:29.374 10:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.374 10:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
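The first error case, whose output follows, works like this: with crc32c assigned to the error module on the target, the test disables injection, attaches the controller over bperf.sock with --ddgst, then tells the target to corrupt the next 256 crc32c operations, so the following reads fail their receive-side digest check, which bdev_nvme keeps retrying (--bdev-retry-count -1) and the trace reports as the TRANSIENT TRANSPORT ERROR completions below. Condensed from the commands in the trace, with the target-side rpc_cmd calls written as plain rpc.py invocations against the target's default RPC socket:

  # bperf side: keep per-NVMe error statistics and retry failed I/O indefinitely
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # target side: make sure nothing is injected yet
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable

  # bperf side: attach the controller with data digest enabled
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # target side: corrupt the next 256 crc32c operations
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

  # bperf side: drive I/O and observe the data digest errors in the completions
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests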
00:26:29.374 10:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.374 10:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:29.374 10:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:29.374 Running I/O for 2 seconds... 00:26:29.374 [2024-11-20 10:00:06.241825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.374 [2024-11-20 10:00:06.241884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.374 [2024-11-20 10:00:06.241906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.374 [2024-11-20 10:00:06.258264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.374 [2024-11-20 10:00:06.258325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.374 [2024-11-20 10:00:06.258348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.374 [2024-11-20 10:00:06.270762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.374 [2024-11-20 10:00:06.270792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.374 [2024-11-20 10:00:06.270824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.374 [2024-11-20 10:00:06.283989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.374 [2024-11-20 10:00:06.284021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.374 [2024-11-20 10:00:06.284038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.632 [2024-11-20 10:00:06.298460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.632 [2024-11-20 10:00:06.298492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.632 [2024-11-20 10:00:06.298510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.632 [2024-11-20 10:00:06.311135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.632 [2024-11-20 10:00:06.311166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.632 [2024-11-20 10:00:06.311203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.632 [2024-11-20 10:00:06.323048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.632 [2024-11-20 10:00:06.323077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.632 [2024-11-20 10:00:06.323107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.632 [2024-11-20 10:00:06.337217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.632 [2024-11-20 10:00:06.337247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.632 [2024-11-20 10:00:06.337280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.632 [2024-11-20 10:00:06.350191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.632 [2024-11-20 10:00:06.350221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.632 [2024-11-20 10:00:06.350238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.632 [2024-11-20 10:00:06.361818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.632 [2024-11-20 10:00:06.361846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.632 [2024-11-20 10:00:06.361877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.632 [2024-11-20 10:00:06.375719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.632 [2024-11-20 10:00:06.375763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.632 [2024-11-20 10:00:06.375779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.632 [2024-11-20 10:00:06.391102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.632 [2024-11-20 10:00:06.391130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.632 [2024-11-20 10:00:06.391160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.632 [2024-11-20 10:00:06.403158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.632 [2024-11-20 10:00:06.403186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.632 [2024-11-20 10:00:06.403216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.632 [2024-11-20 10:00:06.416561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.632 [2024-11-20 10:00:06.416589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.632 [2024-11-20 10:00:06.416620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.632 [2024-11-20 10:00:06.432838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.632 [2024-11-20 10:00:06.432866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.632 [2024-11-20 10:00:06.432898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.632 [2024-11-20 10:00:06.445749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.632 [2024-11-20 10:00:06.445779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.632 [2024-11-20 10:00:06.445811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.632 [2024-11-20 10:00:06.459764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.632 [2024-11-20 10:00:06.459795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.632 [2024-11-20 10:00:06.459828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.632 [2024-11-20 10:00:06.471007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.632 [2024-11-20 10:00:06.471037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.632 [2024-11-20 10:00:06.471070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.632 [2024-11-20 10:00:06.484829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.632 [2024-11-20 10:00:06.484858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.632 [2024-11-20 10:00:06.484895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.632 [2024-11-20 10:00:06.498580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.632 [2024-11-20 10:00:06.498610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.632 [2024-11-20 10:00:06.498644] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.632 [2024-11-20 10:00:06.514560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.632 [2024-11-20 10:00:06.514607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.632 [2024-11-20 10:00:06.514625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.632 [2024-11-20 10:00:06.526392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.632 [2024-11-20 10:00:06.526423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.632 [2024-11-20 10:00:06.526440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.632 [2024-11-20 10:00:06.541033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.632 [2024-11-20 10:00:06.541063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.632 [2024-11-20 10:00:06.541096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.891 [2024-11-20 10:00:06.555916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.891 [2024-11-20 10:00:06.555945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.891 [2024-11-20 10:00:06.555978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.891 [2024-11-20 10:00:06.569885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.891 [2024-11-20 10:00:06.569914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.891 [2024-11-20 10:00:06.569945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.891 [2024-11-20 10:00:06.582862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.891 [2024-11-20 10:00:06.582892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.891 [2024-11-20 10:00:06.582923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.891 [2024-11-20 10:00:06.595022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.891 [2024-11-20 10:00:06.595050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.891 
[2024-11-20 10:00:06.595083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.891 [2024-11-20 10:00:06.608933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.891 [2024-11-20 10:00:06.608969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.891 [2024-11-20 10:00:06.609002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.891 [2024-11-20 10:00:06.623147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.891 [2024-11-20 10:00:06.623195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.891 [2024-11-20 10:00:06.623213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.891 [2024-11-20 10:00:06.637707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.891 [2024-11-20 10:00:06.637736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.891 [2024-11-20 10:00:06.637767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.891 [2024-11-20 10:00:06.652803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.891 [2024-11-20 10:00:06.652833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.891 [2024-11-20 10:00:06.652865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.891 [2024-11-20 10:00:06.663246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.891 [2024-11-20 10:00:06.663275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.891 [2024-11-20 10:00:06.663314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.891 [2024-11-20 10:00:06.677950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.891 [2024-11-20 10:00:06.677980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.891 [2024-11-20 10:00:06.677996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.891 [2024-11-20 10:00:06.690443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.891 [2024-11-20 10:00:06.690487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11522 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:29.891 [2024-11-20 10:00:06.690504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.891 [2024-11-20 10:00:06.705888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.891 [2024-11-20 10:00:06.705918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.891 [2024-11-20 10:00:06.705950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.891 [2024-11-20 10:00:06.717959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.891 [2024-11-20 10:00:06.717990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.891 [2024-11-20 10:00:06.718008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.891 [2024-11-20 10:00:06.732157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.891 [2024-11-20 10:00:06.732187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.891 [2024-11-20 10:00:06.732221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.891 [2024-11-20 10:00:06.748569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.891 [2024-11-20 10:00:06.748613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.892 [2024-11-20 10:00:06.748630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.892 [2024-11-20 10:00:06.759385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.892 [2024-11-20 10:00:06.759416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.892 [2024-11-20 10:00:06.759448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.892 [2024-11-20 10:00:06.775570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.892 [2024-11-20 10:00:06.775603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.892 [2024-11-20 10:00:06.775621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.892 [2024-11-20 10:00:06.789967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:29.892 [2024-11-20 10:00:06.789996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:43 nsid:1 lba:8795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.892 [2024-11-20 10:00:06.790014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.150 [2024-11-20 10:00:06.804337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.150 [2024-11-20 10:00:06.804369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.150 [2024-11-20 10:00:06.804387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.150 [2024-11-20 10:00:06.817984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.150 [2024-11-20 10:00:06.818014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.150 [2024-11-20 10:00:06.818047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.150 [2024-11-20 10:00:06.834255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.150 [2024-11-20 10:00:06.834283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.150 [2024-11-20 10:00:06.834322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.150 [2024-11-20 10:00:06.847317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.150 [2024-11-20 10:00:06.847349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.150 [2024-11-20 10:00:06.847375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.150 [2024-11-20 10:00:06.858622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.150 [2024-11-20 10:00:06.858666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.150 [2024-11-20 10:00:06.858682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.150 [2024-11-20 10:00:06.873280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.150 [2024-11-20 10:00:06.873333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.150 [2024-11-20 10:00:06.873351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.150 [2024-11-20 10:00:06.890113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.150 [2024-11-20 10:00:06.890142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.150 [2024-11-20 10:00:06.890174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.150 [2024-11-20 10:00:06.905953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.150 [2024-11-20 10:00:06.905983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.150 [2024-11-20 10:00:06.906019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.150 [2024-11-20 10:00:06.920158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.150 [2024-11-20 10:00:06.920189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.150 [2024-11-20 10:00:06.920207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.150 [2024-11-20 10:00:06.931053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.150 [2024-11-20 10:00:06.931082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.150 [2024-11-20 10:00:06.931113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.150 [2024-11-20 10:00:06.946271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.150 [2024-11-20 10:00:06.946299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.150 [2024-11-20 10:00:06.946339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.150 [2024-11-20 10:00:06.959485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.150 [2024-11-20 10:00:06.959532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.150 [2024-11-20 10:00:06.959550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.150 [2024-11-20 10:00:06.971325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.150 [2024-11-20 10:00:06.971369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.150 [2024-11-20 10:00:06.971386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.150 [2024-11-20 10:00:06.986659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.150 
[2024-11-20 10:00:06.986688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.150 [2024-11-20 10:00:06.986720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.150 [2024-11-20 10:00:07.003151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.150 [2024-11-20 10:00:07.003195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.150 [2024-11-20 10:00:07.003213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.150 [2024-11-20 10:00:07.018828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.150 [2024-11-20 10:00:07.018858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.150 [2024-11-20 10:00:07.018891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.150 [2024-11-20 10:00:07.032106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.150 [2024-11-20 10:00:07.032136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.150 [2024-11-20 10:00:07.032168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.150 [2024-11-20 10:00:07.046411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.150 [2024-11-20 10:00:07.046443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.150 [2024-11-20 10:00:07.046461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.150 [2024-11-20 10:00:07.058073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.150 [2024-11-20 10:00:07.058105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.150 [2024-11-20 10:00:07.058123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.411 [2024-11-20 10:00:07.074805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.411 [2024-11-20 10:00:07.074837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.411 [2024-11-20 10:00:07.074870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.411 [2024-11-20 10:00:07.090136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xdfebf0) 00:26:30.411 [2024-11-20 10:00:07.090166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.411 [2024-11-20 10:00:07.090205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.411 [2024-11-20 10:00:07.105142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.411 [2024-11-20 10:00:07.105187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.411 [2024-11-20 10:00:07.105204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.411 [2024-11-20 10:00:07.117533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.411 [2024-11-20 10:00:07.117562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.411 [2024-11-20 10:00:07.117596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.411 [2024-11-20 10:00:07.131482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.411 [2024-11-20 10:00:07.131528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.411 [2024-11-20 10:00:07.131546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.411 [2024-11-20 10:00:07.146148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.411 [2024-11-20 10:00:07.146178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.411 [2024-11-20 10:00:07.146210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.411 [2024-11-20 10:00:07.157181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.411 [2024-11-20 10:00:07.157211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.411 [2024-11-20 10:00:07.157244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.411 [2024-11-20 10:00:07.172088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.411 [2024-11-20 10:00:07.172134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.411 [2024-11-20 10:00:07.172152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.411 [2024-11-20 10:00:07.187400] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.411 [2024-11-20 10:00:07.187432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.411 [2024-11-20 10:00:07.187450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.411 [2024-11-20 10:00:07.203282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.411 [2024-11-20 10:00:07.203323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.411 [2024-11-20 10:00:07.203343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.411 [2024-11-20 10:00:07.218785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.411 [2024-11-20 10:00:07.218840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.411 [2024-11-20 10:00:07.218860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.411 18229.00 IOPS, 71.21 MiB/s [2024-11-20T09:00:07.325Z] [2024-11-20 10:00:07.232047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.411 [2024-11-20 10:00:07.232077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.411 [2024-11-20 10:00:07.232110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.411 [2024-11-20 10:00:07.245025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.411 [2024-11-20 10:00:07.245054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.411 [2024-11-20 10:00:07.245086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.411 [2024-11-20 10:00:07.258387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.411 [2024-11-20 10:00:07.258419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.411 [2024-11-20 10:00:07.258454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.411 [2024-11-20 10:00:07.272224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.411 [2024-11-20 10:00:07.272254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.411 [2024-11-20 10:00:07.272286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.411 [2024-11-20 10:00:07.288278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.411 [2024-11-20 10:00:07.288333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.411 [2024-11-20 10:00:07.288361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.411 [2024-11-20 10:00:07.304020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.411 [2024-11-20 10:00:07.304051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.411 [2024-11-20 10:00:07.304083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.411 [2024-11-20 10:00:07.315100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.411 [2024-11-20 10:00:07.315129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.411 [2024-11-20 10:00:07.315161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.670 [2024-11-20 10:00:07.332061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.670 [2024-11-20 10:00:07.332092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.670 [2024-11-20 10:00:07.332109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.670 [2024-11-20 10:00:07.346513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.670 [2024-11-20 10:00:07.346545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.670 [2024-11-20 10:00:07.346568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.670 [2024-11-20 10:00:07.361665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.670 [2024-11-20 10:00:07.361697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.670 [2024-11-20 10:00:07.361729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.670 [2024-11-20 10:00:07.372792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.670 [2024-11-20 10:00:07.372820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.670 [2024-11-20 10:00:07.372851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.670 [2024-11-20 10:00:07.388434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.670 [2024-11-20 10:00:07.388466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.670 [2024-11-20 10:00:07.388484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.670 [2024-11-20 10:00:07.403272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.670 [2024-11-20 10:00:07.403313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.670 [2024-11-20 10:00:07.403335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.670 [2024-11-20 10:00:07.418922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.670 [2024-11-20 10:00:07.418953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.670 [2024-11-20 10:00:07.418986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.670 [2024-11-20 10:00:07.431170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.670 [2024-11-20 10:00:07.431199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.670 [2024-11-20 10:00:07.431230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.670 [2024-11-20 10:00:07.446375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.670 [2024-11-20 10:00:07.446407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.670 [2024-11-20 10:00:07.446425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.670 [2024-11-20 10:00:07.463221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.670 [2024-11-20 10:00:07.463251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.670 [2024-11-20 10:00:07.463292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.670 [2024-11-20 10:00:07.475493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.670 [2024-11-20 10:00:07.475525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.670 
[2024-11-20 10:00:07.475542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.670 [2024-11-20 10:00:07.488945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.670 [2024-11-20 10:00:07.488974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.670 [2024-11-20 10:00:07.489007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.670 [2024-11-20 10:00:07.505000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.670 [2024-11-20 10:00:07.505030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.670 [2024-11-20 10:00:07.505061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.670 [2024-11-20 10:00:07.519221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.670 [2024-11-20 10:00:07.519251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.670 [2024-11-20 10:00:07.519284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.670 [2024-11-20 10:00:07.531059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.670 [2024-11-20 10:00:07.531088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.670 [2024-11-20 10:00:07.531120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.670 [2024-11-20 10:00:07.544489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.670 [2024-11-20 10:00:07.544517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.670 [2024-11-20 10:00:07.544548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.670 [2024-11-20 10:00:07.558076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.670 [2024-11-20 10:00:07.558104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.670 [2024-11-20 10:00:07.558136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.671 [2024-11-20 10:00:07.569494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.671 [2024-11-20 10:00:07.569523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22522 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.671 [2024-11-20 10:00:07.569555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.929 [2024-11-20 10:00:07.586072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.929 [2024-11-20 10:00:07.586117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-11-20 10:00:07.586134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.929 [2024-11-20 10:00:07.599397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.929 [2024-11-20 10:00:07.599427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-11-20 10:00:07.599460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.929 [2024-11-20 10:00:07.614363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.929 [2024-11-20 10:00:07.614394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-11-20 10:00:07.614412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.929 [2024-11-20 10:00:07.625704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.929 [2024-11-20 10:00:07.625733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-11-20 10:00:07.625767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.929 [2024-11-20 10:00:07.642110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.929 [2024-11-20 10:00:07.642138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-11-20 10:00:07.642169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.929 [2024-11-20 10:00:07.653273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.929 [2024-11-20 10:00:07.653327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-11-20 10:00:07.653346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.929 [2024-11-20 10:00:07.668866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.929 [2024-11-20 10:00:07.668895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-11-20 10:00:07.668926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.929 [2024-11-20 10:00:07.685012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.929 [2024-11-20 10:00:07.685042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-11-20 10:00:07.685074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.929 [2024-11-20 10:00:07.699691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.929 [2024-11-20 10:00:07.699720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-11-20 10:00:07.699757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.929 [2024-11-20 10:00:07.710041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.929 [2024-11-20 10:00:07.710069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-11-20 10:00:07.710100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.929 [2024-11-20 10:00:07.725966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.929 [2024-11-20 10:00:07.725996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-11-20 10:00:07.726028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.929 [2024-11-20 10:00:07.740800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.929 [2024-11-20 10:00:07.740829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-11-20 10:00:07.740860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.929 [2024-11-20 10:00:07.754499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.929 [2024-11-20 10:00:07.754530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-11-20 10:00:07.754563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.929 [2024-11-20 10:00:07.769880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.929 [2024-11-20 10:00:07.769910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-11-20 10:00:07.769943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.929 [2024-11-20 10:00:07.781125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.929 [2024-11-20 10:00:07.781153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-11-20 10:00:07.781184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.929 [2024-11-20 10:00:07.796005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.929 [2024-11-20 10:00:07.796035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.929 [2024-11-20 10:00:07.796052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.930 [2024-11-20 10:00:07.810681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.930 [2024-11-20 10:00:07.810709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.930 [2024-11-20 10:00:07.810739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.930 [2024-11-20 10:00:07.825723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.930 [2024-11-20 10:00:07.825756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.930 [2024-11-20 10:00:07.825788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.930 [2024-11-20 10:00:07.840447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:30.930 [2024-11-20 10:00:07.840477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.930 [2024-11-20 10:00:07.840510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.188 [2024-11-20 10:00:07.855179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.188 [2024-11-20 10:00:07.855208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.188 [2024-11-20 10:00:07.855241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.188 [2024-11-20 10:00:07.866510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 
00:26:31.188 [2024-11-20 10:00:07.866539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.188 [2024-11-20 10:00:07.866570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.188 [2024-11-20 10:00:07.879521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.188 [2024-11-20 10:00:07.879550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.188 [2024-11-20 10:00:07.879581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.188 [2024-11-20 10:00:07.895541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.188 [2024-11-20 10:00:07.895571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.188 [2024-11-20 10:00:07.895588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.188 [2024-11-20 10:00:07.906115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.188 [2024-11-20 10:00:07.906145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.188 [2024-11-20 10:00:07.906161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.188 [2024-11-20 10:00:07.921479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.188 [2024-11-20 10:00:07.921507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.188 [2024-11-20 10:00:07.921539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.188 [2024-11-20 10:00:07.934226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.188 [2024-11-20 10:00:07.934255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.188 [2024-11-20 10:00:07.934270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.188 [2024-11-20 10:00:07.946778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.188 [2024-11-20 10:00:07.946806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.188 [2024-11-20 10:00:07.946822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.188 [2024-11-20 10:00:07.958427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xdfebf0) 00:26:31.188 [2024-11-20 10:00:07.958456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.188 [2024-11-20 10:00:07.958487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.188 [2024-11-20 10:00:07.971469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.188 [2024-11-20 10:00:07.971499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.188 [2024-11-20 10:00:07.971531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.188 [2024-11-20 10:00:07.987749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.188 [2024-11-20 10:00:07.987778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.188 [2024-11-20 10:00:07.987809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.188 [2024-11-20 10:00:08.001822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.188 [2024-11-20 10:00:08.001851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.188 [2024-11-20 10:00:08.001883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.188 [2024-11-20 10:00:08.015205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.188 [2024-11-20 10:00:08.015235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.188 [2024-11-20 10:00:08.015267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.188 [2024-11-20 10:00:08.026437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.188 [2024-11-20 10:00:08.026468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.188 [2024-11-20 10:00:08.026502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.188 [2024-11-20 10:00:08.040969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.188 [2024-11-20 10:00:08.040997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.188 [2024-11-20 10:00:08.041027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.188 [2024-11-20 10:00:08.055986] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.189 [2024-11-20 10:00:08.056014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.189 [2024-11-20 10:00:08.056051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.189 [2024-11-20 10:00:08.071654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.189 [2024-11-20 10:00:08.071700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.189 [2024-11-20 10:00:08.071717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.189 [2024-11-20 10:00:08.086001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.189 [2024-11-20 10:00:08.086032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.189 [2024-11-20 10:00:08.086049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.189 [2024-11-20 10:00:08.097764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.189 [2024-11-20 10:00:08.097806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.189 [2024-11-20 10:00:08.097822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.446 [2024-11-20 10:00:08.111940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.446 [2024-11-20 10:00:08.111971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.446 [2024-11-20 10:00:08.111988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.446 [2024-11-20 10:00:08.125130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.446 [2024-11-20 10:00:08.125160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.446 [2024-11-20 10:00:08.125192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.446 [2024-11-20 10:00:08.140633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.446 [2024-11-20 10:00:08.140683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.446 [2024-11-20 10:00:08.140700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
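The triplets repeated throughout this stretch of the log all report the same event: nvme_tcp.c flags a data digest mismatch on a received PDU (the NVMe/TCP data digest is a CRC32C over the PDU payload, enabled on this connection via --ddgst as the digest.sh trace further below shows), and the affected READ is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0 / status code 0x22 with the do-not-retry bit clear, so the host may retry the command. The mismatches are expected here: the nvmf_digest_error test corrupts the initiator-side crc32c calculations on purpose (the accel_error_inject_error RPC used for that is visible later in the trace for the follow-up run). As a rough illustration of what is being checked, a minimal pure-Python CRC32C (Castagnoli) sketch follows; SPDK itself computes the digest through its accel framework with table-driven or offloaded implementations, so this is illustrative only.

# Minimal bitwise CRC32C (Castagnoli) sketch -- the checksum used for the
# NVMe/TCP data digest whose mismatch produces the "data digest error"
# records above. Real code paths use table-driven or hardware-offloaded
# implementations; this sketch only shows the arithmetic being verified.
CRC32C_POLY_REFLECTED = 0x82F63B78

def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ CRC32C_POLY_REFLECTED if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

if __name__ == "__main__":
    # Standard CRC-32C check value.
    assert crc32c(b"123456789") == 0xE3069283
    print(hex(crc32c(b"123456789")))

A receiver that computes this value over the DATA field of an incoming PDU and finds it different from the digest carried in the PDU reports exactly the kind of error shown above.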
00:26:31.446 [2024-11-20 10:00:08.152822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.446 [2024-11-20 10:00:08.152851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.446 [2024-11-20 10:00:08.152883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.446 [2024-11-20 10:00:08.165685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.446 [2024-11-20 10:00:08.165716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.446 [2024-11-20 10:00:08.165748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.446 [2024-11-20 10:00:08.180661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.447 [2024-11-20 10:00:08.180711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.447 [2024-11-20 10:00:08.180730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.447 [2024-11-20 10:00:08.191484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.447 [2024-11-20 10:00:08.191514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.447 [2024-11-20 10:00:08.191532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.447 [2024-11-20 10:00:08.206280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.447 [2024-11-20 10:00:08.206331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.447 [2024-11-20 10:00:08.206349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.447 [2024-11-20 10:00:08.221840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdfebf0) 00:26:31.447 [2024-11-20 10:00:08.221870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.447 [2024-11-20 10:00:08.221903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.447 18274.00 IOPS, 71.38 MiB/s 00:26:31.447 Latency(us) 00:26:31.447 [2024-11-20T09:00:08.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.447 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:31.447 nvme0n1 : 2.01 18291.19 71.45 0.00 0.00 6988.97 3349.62 21651.15 00:26:31.447 [2024-11-20T09:00:08.361Z] 
=================================================================================================================== 00:26:31.447 [2024-11-20T09:00:08.361Z] Total : 18291.19 71.45 0.00 0.00 6988.97 3349.62 21651.15 00:26:31.447 { 00:26:31.447 "results": [ 00:26:31.447 { 00:26:31.447 "job": "nvme0n1", 00:26:31.447 "core_mask": "0x2", 00:26:31.447 "workload": "randread", 00:26:31.447 "status": "finished", 00:26:31.447 "queue_depth": 128, 00:26:31.447 "io_size": 4096, 00:26:31.447 "runtime": 2.005118, 00:26:31.447 "iops": 18291.192837528764, 00:26:31.447 "mibps": 71.44997202159674, 00:26:31.447 "io_failed": 0, 00:26:31.447 "io_timeout": 0, 00:26:31.447 "avg_latency_us": 6988.971595270698, 00:26:31.447 "min_latency_us": 3349.617777777778, 00:26:31.447 "max_latency_us": 21651.152592592593 00:26:31.447 } 00:26:31.447 ], 00:26:31.447 "core_count": 1 00:26:31.447 } 00:26:31.447 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:31.447 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:31.447 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:31.447 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:31.447 | .driver_specific 00:26:31.447 | .nvme_error 00:26:31.447 | .status_code 00:26:31.447 | .command_transient_transport_error' 00:26:31.704 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:26:31.704 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3842197 00:26:31.704 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3842197 ']' 00:26:31.704 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3842197 00:26:31.704 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:31.704 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:31.704 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3842197 00:26:31.704 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:31.704 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:31.704 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3842197' 00:26:31.704 killing process with pid 3842197 00:26:31.704 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3842197 00:26:31.704 Received shutdown signal, test time was about 2.000000 seconds 00:26:31.704 00:26:31.704 Latency(us) 00:26:31.704 [2024-11-20T09:00:08.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.704 [2024-11-20T09:00:08.618Z] =================================================================================================================== 00:26:31.704 [2024-11-20T09:00:08.618Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:31.705 10:00:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3842197 00:26:31.962 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:31.962 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:31.962 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:31.962 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:31.962 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:31.962 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3843026 00:26:31.962 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:31.962 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3843026 /var/tmp/bperf.sock 00:26:31.962 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3843026 ']' 00:26:31.962 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:31.962 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:31.962 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:31.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:31.962 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:31.962 10:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:31.962 [2024-11-20 10:00:08.810087] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:26:31.962 [2024-11-20 10:00:08.810172] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3843026 ] 00:26:31.962 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:31.962 Zero copy mechanism will not be used. 
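With the two-second randread pass above finished (18291.19 IOPS, 6988.97 us average latency), bdevperf prints its JSON results block and host/digest.sh verifies that the injected digest corruption actually surfaced: get_transient_errcount calls bdev_get_iostat over the bperf.sock RPC socket, extracts .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error with jq, and asserts the count is non-zero (143 here, satisfying the (( 143 > 0 )) check). The bdevperf process 3842197 is then killed and the next variant, run_bperf_err randread 131072 16, launches a fresh bdevperf (-w randread -o 131072 -q 16 -z) on the same socket. A small sketch of that error-count check follows, replaying the same rpc.py invocation from the trace but doing the extraction in Python instead of jq; the script path and socket name are this CI host's values.

# Sketch of the get_transient_errcount step traced above: query bdev_get_iostat
# over the bdevperf RPC socket and pull out the transient-transport-error
# counter that the injected digest corruption is expected to bump.
import json
import subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
SOCK = "/var/tmp/bperf.sock"

def get_transient_errcount(bdev: str) -> int:
    out = subprocess.run(
        [RPC, "-s", SOCK, "bdev_get_iostat", "-b", bdev],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)
    # Same path as the jq filter in the trace:
    # .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error
    return stats["bdevs"][0]["driver_specific"]["nvme_error"][
        "status_code"]["command_transient_transport_error"]

if __name__ == "__main__":
    count = get_transient_errcount("nvme0n1")
    assert count > 0, "digest corruption should surface as transient transport errors"
    print(count)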
00:26:32.221 [2024-11-20 10:00:08.879851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.221 [2024-11-20 10:00:08.940428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.221 10:00:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.221 10:00:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:32.221 10:00:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:32.221 10:00:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:32.479 10:00:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:32.479 10:00:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.479 10:00:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:32.479 10:00:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.479 10:00:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.479 10:00:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:33.044 nvme0n1 00:26:33.044 10:00:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:33.044 10:00:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.044 10:00:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:33.044 10:00:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.044 10:00:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:33.044 10:00:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:33.044 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:33.044 Zero copy mechanism will not be used. 00:26:33.044 Running I/O for 2 seconds... 
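Before the error-injection pass starts, the trace above shows the full setup sequence issued against the new bdevperf instance: bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 (keep per-status error counters and retry failed commands indefinitely), accel_error_inject_error -o crc32c -t disable (clear any stale injection), bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 (attach the target with the TCP data digest enabled), accel_error_inject_error -o crc32c -t corrupt -i 32 (corrupt the next 32 crc32c operations), and finally bdevperf.py perform_tests. The sketch below simply replays those RPCs from Python; every method name and argument is copied from the xtrace, and the socket and paths are this CI host's values rather than anything portable.

# Sketch of the setup sequence traced above for the 131072-byte / qd 16
# error-injection pass: each rpc() call replays one rpc.py invocation
# from the xtrace against the bdevperf RPC socket.
import subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
SOCK = "/var/tmp/bperf.sock"

def rpc(*args: str) -> str:
    return subprocess.run([RPC, "-s", SOCK, *args],
                          check=True, capture_output=True, text=True).stdout

if __name__ == "__main__":
    # Keep NVMe error counters and retry failed commands forever.
    rpc("bdev_nvme_set_options", "--nvme-error-stat", "--bdev-retry-count", "-1")
    # Make sure no stale crc32c injection is active.
    rpc("accel_error_inject_error", "-o", "crc32c", "-t", "disable")
    # Attach the target with the data digest (--ddgst) enabled on the TCP qpair.
    rpc("bdev_nvme_attach_controller", "--ddgst", "-t", "tcp", "-a", "10.0.0.2",
        "-s", "4420", "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0")
    # Corrupt the next 32 crc32c operations so received data digests mismatch.
    rpc("accel_error_inject_error", "-o", "crc32c", "-t", "corrupt", "-i", "32")

The digest-error records that follow carry len:32 because this pass issues 131072-byte I/Os over 4096-byte blocks, and they are produced once perform_tests starts the two-second run shown next.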
00:26:33.044 [2024-11-20 10:00:09.852956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.044 [2024-11-20 10:00:09.853022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.044 [2024-11-20 10:00:09.853045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.044 [2024-11-20 10:00:09.859783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.044 [2024-11-20 10:00:09.859819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.044 [2024-11-20 10:00:09.859838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.044 [2024-11-20 10:00:09.866100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.044 [2024-11-20 10:00:09.866133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.044 [2024-11-20 10:00:09.866152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.044 [2024-11-20 10:00:09.872439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.044 [2024-11-20 10:00:09.872472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.044 [2024-11-20 10:00:09.872490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.044 [2024-11-20 10:00:09.878878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.044 [2024-11-20 10:00:09.878909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.044 [2024-11-20 10:00:09.878944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.044 [2024-11-20 10:00:09.884951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.044 [2024-11-20 10:00:09.884983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.044 [2024-11-20 10:00:09.885003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.044 [2024-11-20 10:00:09.891056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.044 [2024-11-20 10:00:09.891087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.045 [2024-11-20 10:00:09.891105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.045 [2024-11-20 10:00:09.897297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.045 [2024-11-20 10:00:09.897336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.045 [2024-11-20 10:00:09.897354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.045 [2024-11-20 10:00:09.903912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.045 [2024-11-20 10:00:09.903944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.045 [2024-11-20 10:00:09.903963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.045 [2024-11-20 10:00:09.909420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.045 [2024-11-20 10:00:09.909452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.045 [2024-11-20 10:00:09.909471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.045 [2024-11-20 10:00:09.914591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.045 [2024-11-20 10:00:09.914622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.045 [2024-11-20 10:00:09.914640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.045 [2024-11-20 10:00:09.920048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.045 [2024-11-20 10:00:09.920079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.045 [2024-11-20 10:00:09.920099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.045 [2024-11-20 10:00:09.924081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.045 [2024-11-20 10:00:09.924118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.045 [2024-11-20 10:00:09.924137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.045 [2024-11-20 10:00:09.928203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.045 [2024-11-20 10:00:09.928234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.045 [2024-11-20 10:00:09.928252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.045 [2024-11-20 10:00:09.933190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.045 [2024-11-20 10:00:09.933221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.045 [2024-11-20 10:00:09.933254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.045 [2024-11-20 10:00:09.938678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.045 [2024-11-20 10:00:09.938735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.045 [2024-11-20 10:00:09.938753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.045 [2024-11-20 10:00:09.945018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.045 [2024-11-20 10:00:09.945064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.045 [2024-11-20 10:00:09.945082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.045 [2024-11-20 10:00:09.952634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.045 [2024-11-20 10:00:09.952666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.045 [2024-11-20 10:00:09.952684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.304 [2024-11-20 10:00:09.958553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.304 [2024-11-20 10:00:09.958610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.304 [2024-11-20 10:00:09.958628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.304 [2024-11-20 10:00:09.964282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.304 [2024-11-20 10:00:09.964337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.304 [2024-11-20 10:00:09.964357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.304 [2024-11-20 10:00:09.969989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.304 [2024-11-20 10:00:09.970019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.304 [2024-11-20 10:00:09.970052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.304 [2024-11-20 10:00:09.975640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.304 [2024-11-20 10:00:09.975671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.304 [2024-11-20 10:00:09.975689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.304 [2024-11-20 10:00:09.980944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:09.980989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:09.981008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:09.986182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:09.986227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:09.986243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:09.991478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:09.991508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:09.991526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:09.997796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:09.997827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:09.997861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.004070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.004104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.004123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.011330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.011370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 
[2024-11-20 10:00:10.011399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.017579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.017613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.017631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.024245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.024277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.024313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.030064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.030096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.030115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.035946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.035978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.035996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.040102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.040149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.040168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.045908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.045946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.045965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.052348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.052381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.052399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.058333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.058366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.058386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.064445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.064477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.064511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.070595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.070628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.070648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.076564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.076603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.076638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.082071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.082103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.082122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.087623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.087655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.087687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.094158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.094188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.094221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.099883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.099914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.099932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.105492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.105524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.105542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.112551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.112583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.112602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.117150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.117194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.117212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.124685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.124713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.124747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.132183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.132213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.132249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.305 [2024-11-20 10:00:10.140097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.305 [2024-11-20 10:00:10.140129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.305 [2024-11-20 10:00:10.140147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.306 [2024-11-20 10:00:10.147793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.306 [2024-11-20 10:00:10.147823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.306 [2024-11-20 10:00:10.147856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.306 [2024-11-20 10:00:10.156207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.306 [2024-11-20 10:00:10.156252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.306 [2024-11-20 10:00:10.156269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.306 [2024-11-20 10:00:10.163829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.306 [2024-11-20 10:00:10.163875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.306 [2024-11-20 10:00:10.163893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.306 [2024-11-20 10:00:10.169420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.306 [2024-11-20 10:00:10.169467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.306 [2024-11-20 10:00:10.169486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.306 [2024-11-20 10:00:10.174828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.306 [2024-11-20 10:00:10.174858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.306 [2024-11-20 10:00:10.174895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.306 [2024-11-20 10:00:10.179949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.306 [2024-11-20 10:00:10.179979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.306 [2024-11-20 10:00:10.180012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.306 [2024-11-20 10:00:10.185173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.306 
[2024-11-20 10:00:10.185201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.306 [2024-11-20 10:00:10.185225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.306 [2024-11-20 10:00:10.190196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.306 [2024-11-20 10:00:10.190224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.306 [2024-11-20 10:00:10.190257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.306 [2024-11-20 10:00:10.196109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.306 [2024-11-20 10:00:10.196155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.306 [2024-11-20 10:00:10.196173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.306 [2024-11-20 10:00:10.200495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.306 [2024-11-20 10:00:10.200525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.306 [2024-11-20 10:00:10.200542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.306 [2024-11-20 10:00:10.205857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.306 [2024-11-20 10:00:10.205901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.306 [2024-11-20 10:00:10.205918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.306 [2024-11-20 10:00:10.211143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.306 [2024-11-20 10:00:10.211188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.306 [2024-11-20 10:00:10.211205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.306 [2024-11-20 10:00:10.216155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.568 [2024-11-20 10:00:10.216186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.568 [2024-11-20 10:00:10.216203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.568 [2024-11-20 10:00:10.220930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xcbe2d0) 00:26:33.568 [2024-11-20 10:00:10.220960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.568 [2024-11-20 10:00:10.220978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.568 [2024-11-20 10:00:10.226102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.568 [2024-11-20 10:00:10.226132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.568 [2024-11-20 10:00:10.226149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.568 [2024-11-20 10:00:10.231152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.568 [2024-11-20 10:00:10.231201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.568 [2024-11-20 10:00:10.231219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.568 [2024-11-20 10:00:10.236223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.568 [2024-11-20 10:00:10.236266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.568 [2024-11-20 10:00:10.236284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.568 [2024-11-20 10:00:10.241426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.568 [2024-11-20 10:00:10.241456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.568 [2024-11-20 10:00:10.241474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.568 [2024-11-20 10:00:10.246550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.568 [2024-11-20 10:00:10.246580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.568 [2024-11-20 10:00:10.246611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.568 [2024-11-20 10:00:10.251669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.251713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.251731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.256868] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.256896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.256927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.262178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.262209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.262227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.268137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.268182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.268199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.272477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.272509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.272542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.278148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.278193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.278210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.283794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.283841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.283859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.289146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.289194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.289212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:26:33.569 [2024-11-20 10:00:10.295130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.295174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.295191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.300966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.301023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.301041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.308136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.308183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.308200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.315932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.315978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.315996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.323966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.323998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.324016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.331751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.331797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.331820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.339362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.339393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.339412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.346854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.346911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.346930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.354513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.354544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.354562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.362127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.362172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.362189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.369715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.369747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.369765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.377233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.377265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.377284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.384786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.384817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.384835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.391668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.391714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.391732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.399154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.399190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.399225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.406811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.406841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.406875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.414357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.414389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.414407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.421430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.421462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.421481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.569 [2024-11-20 10:00:10.427655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.569 [2024-11-20 10:00:10.427703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.569 [2024-11-20 10:00:10.427721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.570 [2024-11-20 10:00:10.433353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.570 [2024-11-20 10:00:10.433385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.570 [2024-11-20 10:00:10.433403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.570 [2024-11-20 10:00:10.438560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.570 [2024-11-20 10:00:10.438591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.570 [2024-11-20 10:00:10.438609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.570 [2024-11-20 10:00:10.444029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.570 [2024-11-20 10:00:10.444061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.570 [2024-11-20 10:00:10.444078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.570 [2024-11-20 10:00:10.449561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.570 [2024-11-20 10:00:10.449593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.570 [2024-11-20 10:00:10.449611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.570 [2024-11-20 10:00:10.454807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.570 [2024-11-20 10:00:10.454838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.570 [2024-11-20 10:00:10.454856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.570 [2024-11-20 10:00:10.459556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.570 [2024-11-20 10:00:10.459587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.570 [2024-11-20 10:00:10.459605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.570 [2024-11-20 10:00:10.463328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.570 [2024-11-20 10:00:10.463359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.570 [2024-11-20 10:00:10.463376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.570 [2024-11-20 10:00:10.469515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.570 [2024-11-20 10:00:10.469546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.570 [2024-11-20 10:00:10.469580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.570 [2024-11-20 10:00:10.475638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.570 [2024-11-20 10:00:10.475670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.570 
[2024-11-20 10:00:10.475688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.828 [2024-11-20 10:00:10.482718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.828 [2024-11-20 10:00:10.482763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.828 [2024-11-20 10:00:10.482781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.828 [2024-11-20 10:00:10.489035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.828 [2024-11-20 10:00:10.489065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.828 [2024-11-20 10:00:10.489095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.828 [2024-11-20 10:00:10.494333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.828 [2024-11-20 10:00:10.494364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.828 [2024-11-20 10:00:10.494381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.828 [2024-11-20 10:00:10.499358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.828 [2024-11-20 10:00:10.499389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.499412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.504518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.504549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.504567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.509814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.509857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.509875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.514803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.514835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.514853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.520268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.520300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.520326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.525427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.525457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.525475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.530700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.530746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.530763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.536524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.536556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.536574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.543198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.543229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.543263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.548895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.548925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.548957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.554541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.554573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.554592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.560035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.560066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.560099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.565459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.565489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.565507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.570734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.570763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.570794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.575917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.575947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.575982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.580920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.580966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.580988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.586060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.586103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.586119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.591126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.591155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.591178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.596524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.596554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.596571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.601689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.601719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.601751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.606907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.606938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.606955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.612179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.612210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.612228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.618007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.618039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.618057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.623057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.623088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.623106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.628336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 
[2024-11-20 10:00:10.628367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.628385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.633441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.633471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.633489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.638591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.829 [2024-11-20 10:00:10.638645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.829 [2024-11-20 10:00:10.638662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.829 [2024-11-20 10:00:10.643830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.830 [2024-11-20 10:00:10.643860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.830 [2024-11-20 10:00:10.643878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.830 [2024-11-20 10:00:10.649432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.830 [2024-11-20 10:00:10.649463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.830 [2024-11-20 10:00:10.649480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.830 [2024-11-20 10:00:10.656932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.830 [2024-11-20 10:00:10.656964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.830 [2024-11-20 10:00:10.656982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.830 [2024-11-20 10:00:10.663177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.830 [2024-11-20 10:00:10.663208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.830 [2024-11-20 10:00:10.663226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.830 [2024-11-20 10:00:10.668671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xcbe2d0) 00:26:33.830 [2024-11-20 10:00:10.668703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.830 [2024-11-20 10:00:10.668721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.830 [2024-11-20 10:00:10.674148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.830 [2024-11-20 10:00:10.674185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.830 [2024-11-20 10:00:10.674203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.830 [2024-11-20 10:00:10.677933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.830 [2024-11-20 10:00:10.677964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.830 [2024-11-20 10:00:10.677982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.830 [2024-11-20 10:00:10.682934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.830 [2024-11-20 10:00:10.682966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.830 [2024-11-20 10:00:10.682984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.830 [2024-11-20 10:00:10.690189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.830 [2024-11-20 10:00:10.690231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.830 [2024-11-20 10:00:10.690247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.830 [2024-11-20 10:00:10.696508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.830 [2024-11-20 10:00:10.696540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.830 [2024-11-20 10:00:10.696559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.830 [2024-11-20 10:00:10.701463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.830 [2024-11-20 10:00:10.701495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.830 [2024-11-20 10:00:10.701513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.830 [2024-11-20 10:00:10.706951] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.830 [2024-11-20 10:00:10.706981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.830 [2024-11-20 10:00:10.707014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.830 [2024-11-20 10:00:10.713033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.830 [2024-11-20 10:00:10.713064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.830 [2024-11-20 10:00:10.713100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.830 [2024-11-20 10:00:10.718817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.830 [2024-11-20 10:00:10.718848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.830 [2024-11-20 10:00:10.718866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.830 [2024-11-20 10:00:10.724404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.830 [2024-11-20 10:00:10.724434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.830 [2024-11-20 10:00:10.724465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.830 [2024-11-20 10:00:10.730627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.830 [2024-11-20 10:00:10.730658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.830 [2024-11-20 10:00:10.730691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.830 [2024-11-20 10:00:10.736935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:33.830 [2024-11-20 10:00:10.736983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.830 [2024-11-20 10:00:10.737007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.090 [2024-11-20 10:00:10.743045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.090 [2024-11-20 10:00:10.743077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.090 [2024-11-20 10:00:10.743111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:26:34.090 [2024-11-20 10:00:10.748510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.090 [2024-11-20 10:00:10.748558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.090 [2024-11-20 10:00:10.748576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.090 [2024-11-20 10:00:10.753737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.090 [2024-11-20 10:00:10.753768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.090 [2024-11-20 10:00:10.753801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.090 [2024-11-20 10:00:10.759363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.090 [2024-11-20 10:00:10.759396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.090 [2024-11-20 10:00:10.759414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.090 [2024-11-20 10:00:10.764580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.764637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.764654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.769935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.769966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.769999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.775089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.775146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.775162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.780287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.780346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.780365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.785458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.785494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.785513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.790702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.790733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.790767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.795882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.795912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.795949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.801324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.801356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.801375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.806901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.806946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.806972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.812219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.812251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.812283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.818118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.818162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.818179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.823356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.823403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.823421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.828759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.828788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.828820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.833922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.833956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.833988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.839855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.839885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.839919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.091 5260.00 IOPS, 657.50 MiB/s [2024-11-20T09:00:11.005Z] [2024-11-20 10:00:10.848723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.848767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.848784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.855031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.855063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.855095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.860755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.860800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 
[2024-11-20 10:00:10.860818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.866268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.866325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.866347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.872338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.872370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.872387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.879054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.879098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.879114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.886686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.886717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.886759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.894403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.894449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.894467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.901837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.901870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.901888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.909551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.909583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18656 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.909617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.917513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.917544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.917563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.091 [2024-11-20 10:00:10.925384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.091 [2024-11-20 10:00:10.925432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.091 [2024-11-20 10:00:10.925450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.092 [2024-11-20 10:00:10.933536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.092 [2024-11-20 10:00:10.933568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.092 [2024-11-20 10:00:10.933587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.092 [2024-11-20 10:00:10.941042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.092 [2024-11-20 10:00:10.941088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.092 [2024-11-20 10:00:10.941106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.092 [2024-11-20 10:00:10.947527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.092 [2024-11-20 10:00:10.947559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.092 [2024-11-20 10:00:10.947577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.092 [2024-11-20 10:00:10.952968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.092 [2024-11-20 10:00:10.952999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.092 [2024-11-20 10:00:10.953017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.092 [2024-11-20 10:00:10.958950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.092 [2024-11-20 10:00:10.958982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.092 [2024-11-20 10:00:10.959000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.092 [2024-11-20 10:00:10.964879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.092 [2024-11-20 10:00:10.964910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.092 [2024-11-20 10:00:10.964945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.092 [2024-11-20 10:00:10.971210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.092 [2024-11-20 10:00:10.971242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.092 [2024-11-20 10:00:10.971260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.092 [2024-11-20 10:00:10.977956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.092 [2024-11-20 10:00:10.977986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.092 [2024-11-20 10:00:10.978019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.092 [2024-11-20 10:00:10.984531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.092 [2024-11-20 10:00:10.984563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.092 [2024-11-20 10:00:10.984582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.092 [2024-11-20 10:00:10.992143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.092 [2024-11-20 10:00:10.992188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.092 [2024-11-20 10:00:10.992206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.092 [2024-11-20 10:00:10.997898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.092 [2024-11-20 10:00:10.997930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.092 [2024-11-20 10:00:10.997948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.003481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.003513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.003537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.008595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.008626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.008644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.014124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.014156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.014175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.019697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.019728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.019746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.026820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.026852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.026870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.035050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.035083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.035103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.041997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.042029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.042048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.048447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 
[2024-11-20 10:00:11.048479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.048498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.052396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.052428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.052446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.059879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.059916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.059948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.066934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.066977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.066994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.075062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.075102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.075134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.082253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.082286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.082312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.088704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.088735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.088768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.094873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.094903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.094936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.100365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.100397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.100415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.105797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.105827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.105861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.111444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.111477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.111510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.116657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.116703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.116720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.122282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.122323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.122343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.127757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.127803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.127821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.396 [2024-11-20 10:00:11.132899] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.396 [2024-11-20 10:00:11.132944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.396 [2024-11-20 10:00:11.132960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.138131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.138161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.138178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.143444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.143474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.143492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.148595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.148639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.148656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.153835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.153864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.153896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.158961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.159005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.159027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.164187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.164232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.164249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:26:34.397 [2024-11-20 10:00:11.169388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.169434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.169450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.174685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.174715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.174732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.180495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.180525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.180558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.187822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.187853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.187870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.194827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.194858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.194892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.200768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.200799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.200832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.206634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.206682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.206701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.213403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.213441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.213460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.220024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.220070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.220090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.225546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.225578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.225596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.230928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.230959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.230976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.236425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.236457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.236475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.242003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.242035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.242053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.247450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.247497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.247514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.252697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.252743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.252761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.258157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.258190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.258208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.261643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.261672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.261703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.267312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.267343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.267377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.273123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.273154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.273171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.280107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.280138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.280157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.397 [2024-11-20 10:00:11.287583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.397 [2024-11-20 10:00:11.287616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.397 [2024-11-20 10:00:11.287634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.672 [2024-11-20 10:00:11.294595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.672 [2024-11-20 10:00:11.294629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.672 [2024-11-20 10:00:11.294648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.299768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.299800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.299818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.305093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.305123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.305140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.310651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.310696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.310719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.317355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.317386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.317418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.324662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.324693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.324711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.330438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.330469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 
[2024-11-20 10:00:11.330501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.336260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.336291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.336330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.341435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.341464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.341481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.346692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.346723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.346740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.352100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.352129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.352146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.357385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.357417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.357435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.363078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.363116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.363135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.368836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.368883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21696 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.368900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.374795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.374826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.374844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.380594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.380625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.380643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.385992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.386022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.386039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.391586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.391618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.391635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.397321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.397353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.397371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.403573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.403619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.403637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.409576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.409623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.409641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.415783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.415829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.415848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.420843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.420874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.420893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.426114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.673 [2024-11-20 10:00:11.426144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.673 [2024-11-20 10:00:11.426177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.673 [2024-11-20 10:00:11.431520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.431551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.431568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.437155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.437186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.437204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.442458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.442488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.442506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.447807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.447838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.447856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.453423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.453454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.453473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.459580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.459612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.459636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.466089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.466121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.466140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.471983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.472014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.472032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.477614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.477647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.477665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.482850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.482880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.482898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.488225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 
[2024-11-20 10:00:11.488257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.488275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.493654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.493685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.493703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.498736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.498768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.498786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.504835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.504880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.504900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.508123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.508153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.508186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.514400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.514432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.514451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.520437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.520482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.520500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.526598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.526629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.526660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.532274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.532329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.532363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.536947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.536993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.537010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.542113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.542143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.674 [2024-11-20 10:00:11.542176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.674 [2024-11-20 10:00:11.547219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.674 [2024-11-20 10:00:11.547247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.675 [2024-11-20 10:00:11.547264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.675 [2024-11-20 10:00:11.552397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.675 [2024-11-20 10:00:11.552427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.675 [2024-11-20 10:00:11.552451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.675 [2024-11-20 10:00:11.557541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.675 [2024-11-20 10:00:11.557571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.675 [2024-11-20 10:00:11.557589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.675 [2024-11-20 10:00:11.562867] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.675 [2024-11-20 10:00:11.562912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.675 [2024-11-20 10:00:11.562929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.675 [2024-11-20 10:00:11.568286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.675 [2024-11-20 10:00:11.568336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.675 [2024-11-20 10:00:11.568354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.675 [2024-11-20 10:00:11.573588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.675 [2024-11-20 10:00:11.573619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.675 [2024-11-20 10:00:11.573636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.675 [2024-11-20 10:00:11.578731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.675 [2024-11-20 10:00:11.578761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.675 [2024-11-20 10:00:11.578793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.583861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.583905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.583921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.588863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.588893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.588911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.593980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.594008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.594039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:34.934 [2024-11-20 10:00:11.599168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.599218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.599236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.604345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.604377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.604409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.609412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.609456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.609473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.614729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.614760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.614794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.619969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.620013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.620030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.625614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.625645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.625664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.631645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.631675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.631692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.636921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.636950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.636982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.642165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.642194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.642211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.647417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.647463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.647480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.652617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.652660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.652677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.657972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.658003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.658035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.663209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.663255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.663273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.668764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.668794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.668812] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.675260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.675291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.675317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.681966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.681998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.682017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.687859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.687892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.687911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.693823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.693865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.693890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.699824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.934 [2024-11-20 10:00:11.699855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.934 [2024-11-20 10:00:11.699873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.934 [2024-11-20 10:00:11.705967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.705999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.706017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.711953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.711985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.712004] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.718052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.718084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.718101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.722119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.722150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.722182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.729796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.729827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.729845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.736348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.736379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.736412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.743401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.743432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.743450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.749556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.749611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.749630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.755839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.755886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:34.935 [2024-11-20 10:00:11.755905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.761856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.761884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.761899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.767552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.767600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.767618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.773523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.773554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.773570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.779800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.779829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.779861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.786159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.786189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.786221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.793096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.793127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.793162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.799938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.799969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21760 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.800003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.805081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.805129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.805147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.810214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.810257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.810275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.815414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.815445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.815463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.820698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.820728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.820761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.826036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.826066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.826100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.831289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.831325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.831359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.836377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.836408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.836427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.935 [2024-11-20 10:00:11.841538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:34.935 [2024-11-20 10:00:11.841569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.935 [2024-11-20 10:00:11.841587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:35.193 [2024-11-20 10:00:11.846547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcbe2d0) 00:26:35.193 [2024-11-20 10:00:11.846578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.193 [2024-11-20 10:00:11.846602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:35.193 5274.00 IOPS, 659.25 MiB/s 00:26:35.193 Latency(us) 00:26:35.193 [2024-11-20T09:00:12.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.193 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:35.193 nvme0n1 : 2.00 5275.92 659.49 0.00 0.00 3028.48 634.12 13010.11 00:26:35.193 [2024-11-20T09:00:12.107Z] =================================================================================================================== 00:26:35.193 [2024-11-20T09:00:12.107Z] Total : 5275.92 659.49 0.00 0.00 3028.48 634.12 13010.11 00:26:35.193 { 00:26:35.193 "results": [ 00:26:35.193 { 00:26:35.193 "job": "nvme0n1", 00:26:35.193 "core_mask": "0x2", 00:26:35.193 "workload": "randread", 00:26:35.193 "status": "finished", 00:26:35.193 "queue_depth": 16, 00:26:35.193 "io_size": 131072, 00:26:35.193 "runtime": 2.002304, 00:26:35.193 "iops": 5275.922137697373, 00:26:35.193 "mibps": 659.4902672121716, 00:26:35.193 "io_failed": 0, 00:26:35.193 "io_timeout": 0, 00:26:35.193 "avg_latency_us": 3028.483554209264, 00:26:35.193 "min_latency_us": 634.1214814814815, 00:26:35.193 "max_latency_us": 13010.10962962963 00:26:35.193 } 00:26:35.193 ], 00:26:35.193 "core_count": 1 00:26:35.193 } 00:26:35.193 10:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:35.193 10:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:35.193 10:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:35.193 10:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:35.193 | .driver_specific 00:26:35.193 | .nvme_error 00:26:35.193 | .status_code 00:26:35.193 | .command_transient_transport_error' 00:26:35.451 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 341 > 0 )) 00:26:35.451 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3843026 00:26:35.451 10:00:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3843026 ']' 00:26:35.451 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3843026 00:26:35.451 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:35.451 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:35.451 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3843026 00:26:35.451 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:35.451 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:35.451 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3843026' 00:26:35.451 killing process with pid 3843026 00:26:35.451 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3843026 00:26:35.451 Received shutdown signal, test time was about 2.000000 seconds 00:26:35.451 00:26:35.451 Latency(us) 00:26:35.451 [2024-11-20T09:00:12.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.451 [2024-11-20T09:00:12.365Z] =================================================================================================================== 00:26:35.451 [2024-11-20T09:00:12.366Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:35.452 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3843026 00:26:35.709 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:35.709 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:35.709 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:35.709 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:35.709 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:35.709 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3843524 00:26:35.709 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:35.709 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3843524 /var/tmp/bperf.sock 00:26:35.709 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3843524 ']' 00:26:35.709 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:35.709 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:35.709 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:35.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
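For reference, the check that just ran above (host/digest.sh's get_transient_errcount, visible in the trace) boils down to asking the bdevperf instance, over its /var/tmp/bperf.sock RPC socket, how many completions on nvme0n1 carried the transient transport error status, and asserting that the count is non-zero (341 in this run) before the process is killed and a fresh bdevperf is launched for the randwrite pass. A minimal shell reconstruction is sketched below; the rpc.py invocation and the jq filter are copied from the trace, while the wrapper layout is illustrative rather than the literal host/digest.sh code:

  # Count completions with NVMe status "command transient transport error"
  # as reported by the bdevperf app listening on /var/tmp/bperf.sock.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  get_transient_errcount() {
      local bdev=$1
      "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
  }

  # The digest-error read pass only passes if at least one such completion was seen.
  (( $(get_transient_errcount nvme0n1) > 0 ))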
00:26:35.709 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:35.709 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.709 [2024-11-20 10:00:12.430887] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:26:35.710 [2024-11-20 10:00:12.430970] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3843524 ] 00:26:35.710 [2024-11-20 10:00:12.496423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.710 [2024-11-20 10:00:12.554941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.968 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.968 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:35.968 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:35.968 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:36.226 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:36.226 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.226 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:36.226 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.226 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.226 10:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.791 nvme0n1 00:26:36.791 10:00:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:36.791 10:00:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.791 10:00:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:36.791 10:00:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.791 10:00:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:36.791 10:00:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:36.791 Running I/O for 2 seconds... 
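Condensed, the RPC sequence that produces the error stream below is: enable per-status NVMe error counters with unlimited bdev retries, attach the TCP controller with data digest enabled (--ddgst), arm the accel-layer crc32c error injector on the target side, fire perform_tests, and afterwards read the transient transport error counter back out of bdev_get_iostat. A hedged sketch of that sequence; the addresses, NQN, RPC names and jq filter are copied from the log, while the assumption that the NVMe-oF target listens on rpc.py's default socket, and the shell variable names, are illustrative:

    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf side
    TGT_RPC="$SPDK_DIR/scripts/rpc.py"                            # target side, default socket assumed

    # Keep per-status error counters and retry failed I/O indefinitely (-1),
    # so injected digest errors surface as counted retries, not I/O failures.
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target with TCP data digest enabled so corrupted CRCs are detected.
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Reset any previous injection, then arm crc32c corruption in the target's
    # accel framework (-i 256 as recorded in the log).
    $TGT_RPC accel_error_inject_error -o crc32c -t disable
    $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256

    # Run the queued bdevperf job, then pull the transient-error count.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
    errs=$($BPERF_RPC bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errs > 0 )) && echo "data digest errors observed and retried: $errs"

In the stream that follows, each injected error shows up as a data_crc32_calc_done *ERROR* paired with a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion for the affected write, which is exactly what the counter read at the end tallies.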
00:26:36.791 [2024-11-20 10:00:13.571788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e0a68 00:26:36.791 [2024-11-20 10:00:13.573018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.791 [2024-11-20 10:00:13.573076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:36.791 [2024-11-20 10:00:13.584061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e1b48 00:26:36.791 [2024-11-20 10:00:13.585312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.791 [2024-11-20 10:00:13.585344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:36.791 [2024-11-20 10:00:13.596203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fc998 00:26:36.791 [2024-11-20 10:00:13.597327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.791 [2024-11-20 10:00:13.597371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:36.791 [2024-11-20 10:00:13.607448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166de470 00:26:36.791 [2024-11-20 10:00:13.609102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.791 [2024-11-20 10:00:13.609132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:36.791 [2024-11-20 10:00:13.619911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fa3a0 00:26:36.791 [2024-11-20 10:00:13.621339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.791 [2024-11-20 10:00:13.621385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:36.791 [2024-11-20 10:00:13.630724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166df550 00:26:36.791 [2024-11-20 10:00:13.631793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.791 [2024-11-20 10:00:13.631823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:36.791 [2024-11-20 10:00:13.642114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f7970 00:26:36.791 [2024-11-20 10:00:13.643293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.791 [2024-11-20 10:00:13.643344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 
cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:36.791 [2024-11-20 10:00:13.654456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166eb760 00:26:36.791 [2024-11-20 10:00:13.655720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.791 [2024-11-20 10:00:13.655771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:36.791 [2024-11-20 10:00:13.666403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f0350 00:26:36.792 [2024-11-20 10:00:13.667731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.792 [2024-11-20 10:00:13.667776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:36.792 [2024-11-20 10:00:13.677534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f46d0 00:26:36.792 [2024-11-20 10:00:13.678704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.792 [2024-11-20 10:00:13.678734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:36.792 [2024-11-20 10:00:13.689122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f7538 00:26:36.792 [2024-11-20 10:00:13.690358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.792 [2024-11-20 10:00:13.690387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:36.792 [2024-11-20 10:00:13.703073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fd208 00:26:37.049 [2024-11-20 10:00:13.705057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.049 [2024-11-20 10:00:13.705101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:37.049 [2024-11-20 10:00:13.711591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ff3c8 00:26:37.049 [2024-11-20 10:00:13.712282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.049 [2024-11-20 10:00:13.712332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:37.049 [2024-11-20 10:00:13.723819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f3e60 00:26:37.049 [2024-11-20 10:00:13.724957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.049 [2024-11-20 10:00:13.725002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:19 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:37.049 [2024-11-20 10:00:13.735808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ee5c8 00:26:37.049 [2024-11-20 10:00:13.736840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.049 [2024-11-20 10:00:13.736883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:37.049 [2024-11-20 10:00:13.747403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f4298 00:26:37.050 [2024-11-20 10:00:13.748043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.050 [2024-11-20 10:00:13.748085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:37.050 [2024-11-20 10:00:13.758550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e9168 00:26:37.050 [2024-11-20 10:00:13.759416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.050 [2024-11-20 10:00:13.759462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:37.050 [2024-11-20 10:00:13.771697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f46d0 00:26:37.050 [2024-11-20 10:00:13.772823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.050 [2024-11-20 10:00:13.772852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:37.050 [2024-11-20 10:00:13.784000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166eaef0 00:26:37.050 [2024-11-20 10:00:13.785236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.050 [2024-11-20 10:00:13.785263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:37.050 [2024-11-20 10:00:13.795063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e84c0 00:26:37.050 [2024-11-20 10:00:13.796215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.050 [2024-11-20 10:00:13.796242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:37.050 [2024-11-20 10:00:13.807328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e49b0 00:26:37.050 [2024-11-20 10:00:13.808610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.050 [2024-11-20 10:00:13.808639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:37.050 [2024-11-20 10:00:13.819646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fe720 00:26:37.050 [2024-11-20 10:00:13.821065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.050 [2024-11-20 10:00:13.821108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:37.050 [2024-11-20 10:00:13.831909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f6cc8 00:26:37.050 [2024-11-20 10:00:13.833843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.050 [2024-11-20 10:00:13.833888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:37.050 [2024-11-20 10:00:13.844189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e6b70 00:26:37.050 [2024-11-20 10:00:13.845860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.050 [2024-11-20 10:00:13.845905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:37.050 [2024-11-20 10:00:13.855421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ec840 00:26:37.050 [2024-11-20 10:00:13.856877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.050 [2024-11-20 10:00:13.856921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:37.050 [2024-11-20 10:00:13.866079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e4de8 00:26:37.050 [2024-11-20 10:00:13.867330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.050 [2024-11-20 10:00:13.867367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:37.050 [2024-11-20 10:00:13.877393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f0788 00:26:37.050 [2024-11-20 10:00:13.878355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.050 [2024-11-20 10:00:13.878404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:37.050 [2024-11-20 10:00:13.888602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e1b48 00:26:37.050 [2024-11-20 10:00:13.889476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.050 [2024-11-20 10:00:13.889509] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:37.050 [2024-11-20 10:00:13.901045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f2510 00:26:37.050 [2024-11-20 10:00:13.902177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.050 [2024-11-20 10:00:13.902223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:37.050 [2024-11-20 10:00:13.915530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166eea00 00:26:37.050 [2024-11-20 10:00:13.917253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.050 [2024-11-20 10:00:13.917284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:37.050 [2024-11-20 10:00:13.923921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166de8a8 00:26:37.050 [2024-11-20 10:00:13.924706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.050 [2024-11-20 10:00:13.924749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.050 [2024-11-20 10:00:13.935988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f0ff8 00:26:37.050 [2024-11-20 10:00:13.936814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.050 [2024-11-20 10:00:13.936860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:37.050 [2024-11-20 10:00:13.950148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f6cc8 00:26:37.050 [2024-11-20 10:00:13.951517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.050 [2024-11-20 10:00:13.951564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 10:00:13.962846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fa7d8 00:26:37.308 [2024-11-20 10:00:13.964421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.308 [2024-11-20 10:00:13.964461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 10:00:13.974961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e27f0 00:26:37.308 [2024-11-20 10:00:13.976473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.308 [2024-11-20 10:00:13.976520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 10:00:13.985692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f6890 00:26:37.308 [2024-11-20 10:00:13.986947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.308 [2024-11-20 10:00:13.986976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 10:00:13.997349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f1ca0 00:26:37.308 [2024-11-20 10:00:13.998463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.308 [2024-11-20 10:00:13.998509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 10:00:14.009620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ee5c8 00:26:37.308 [2024-11-20 10:00:14.011030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.308 [2024-11-20 10:00:14.011074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 10:00:14.021147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ed920 00:26:37.308 [2024-11-20 10:00:14.022403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.308 [2024-11-20 10:00:14.022433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 10:00:14.033022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ed920 00:26:37.308 [2024-11-20 10:00:14.034139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.308 [2024-11-20 10:00:14.034183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 10:00:14.044243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f1ca0 00:26:37.308 [2024-11-20 10:00:14.045343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.308 [2024-11-20 10:00:14.045390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 10:00:14.056561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f1868 00:26:37.308 [2024-11-20 10:00:14.057806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.308 [2024-11-20 
10:00:14.057836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 10:00:14.068885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e3060 00:26:37.308 [2024-11-20 10:00:14.070279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.308 [2024-11-20 10:00:14.070331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 10:00:14.081141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f81e0 00:26:37.308 [2024-11-20 10:00:14.082737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.308 [2024-11-20 10:00:14.082783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 10:00:14.093083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166eff18 00:26:37.308 [2024-11-20 10:00:14.094626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.308 [2024-11-20 10:00:14.094669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 10:00:14.103970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e2c28 00:26:37.308 [2024-11-20 10:00:14.105291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.308 [2024-11-20 10:00:14.105330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 10:00:14.115622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e8d30 00:26:37.309 [2024-11-20 10:00:14.116894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.309 [2024-11-20 10:00:14.116923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 10:00:14.129900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f6458 00:26:37.309 [2024-11-20 10:00:14.131874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.309 [2024-11-20 10:00:14.131919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 10:00:14.138264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f57b0 00:26:37.309 [2024-11-20 10:00:14.139323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:37.309 [2024-11-20 10:00:14.139366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 10:00:14.150661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f7970 00:26:37.309 [2024-11-20 10:00:14.151805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.309 [2024-11-20 10:00:14.151850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 10:00:14.162490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ff3c8 00:26:37.309 [2024-11-20 10:00:14.163209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.309 [2024-11-20 10:00:14.163239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 10:00:14.177093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ef6a8 00:26:37.309 [2024-11-20 10:00:14.179048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.309 [2024-11-20 10:00:14.179077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 10:00:14.185518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f96f8 00:26:37.309 [2024-11-20 10:00:14.186381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.309 [2024-11-20 10:00:14.186427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 10:00:14.197370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ebfd0 00:26:37.309 [2024-11-20 10:00:14.198363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.309 [2024-11-20 10:00:14.198393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 10:00:14.209820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fc128 00:26:37.309 [2024-11-20 10:00:14.211079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.309 [2024-11-20 10:00:14.211123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:37.566 [2024-11-20 10:00:14.222238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f8e88 00:26:37.566 [2024-11-20 10:00:14.223647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20584 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:37.566 [2024-11-20 10:00:14.223680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:37.566 [2024-11-20 10:00:14.234715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ee5c8 00:26:37.566 [2024-11-20 10:00:14.236162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.566 [2024-11-20 10:00:14.236205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:37.566 [2024-11-20 10:00:14.247085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ebfd0 00:26:37.566 [2024-11-20 10:00:14.248726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.248755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:37.567 [2024-11-20 10:00:14.259512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e7c50 00:26:37.567 [2024-11-20 10:00:14.261324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.261361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:37.567 [2024-11-20 10:00:14.267914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fb480 00:26:37.567 [2024-11-20 10:00:14.268874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.268926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:37.567 [2024-11-20 10:00:14.282335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e6b70 00:26:37.567 [2024-11-20 10:00:14.283882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.283927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:37.567 [2024-11-20 10:00:14.293292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fe2e8 00:26:37.567 [2024-11-20 10:00:14.294491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.294521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:37.567 [2024-11-20 10:00:14.304798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ebb98 00:26:37.567 [2024-11-20 10:00:14.306089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:14072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.306118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:37.567 [2024-11-20 10:00:14.319035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f20d8 00:26:37.567 [2024-11-20 10:00:14.320907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.320952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:37.567 [2024-11-20 10:00:14.327410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166de470 00:26:37.567 [2024-11-20 10:00:14.328412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.328456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.567 [2024-11-20 10:00:14.339462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166feb58 00:26:37.567 [2024-11-20 10:00:14.340490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.340524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:37.567 [2024-11-20 10:00:14.351506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e88f8 00:26:37.567 [2024-11-20 10:00:14.352122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.352155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.567 [2024-11-20 10:00:14.366343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e01f8 00:26:37.567 [2024-11-20 10:00:14.368258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.368313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.567 [2024-11-20 10:00:14.374762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e5ec8 00:26:37.567 [2024-11-20 10:00:14.375749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.375794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:37.567 [2024-11-20 10:00:14.389051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f7100 00:26:37.567 [2024-11-20 10:00:14.390698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:94 nsid:1 lba:22936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.390742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:37.567 [2024-11-20 10:00:14.399665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f0ff8 00:26:37.567 [2024-11-20 10:00:14.401580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.401609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:37.567 [2024-11-20 10:00:14.409625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f8618 00:26:37.567 [2024-11-20 10:00:14.410474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.410520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:37.567 [2024-11-20 10:00:14.423812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f0788 00:26:37.567 [2024-11-20 10:00:14.425258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.425310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:37.567 [2024-11-20 10:00:14.434628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fc128 00:26:37.567 [2024-11-20 10:00:14.435777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.435806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:37.567 [2024-11-20 10:00:14.446074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f0788 00:26:37.567 [2024-11-20 10:00:14.447327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.447371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:37.567 [2024-11-20 10:00:14.458020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e4140 00:26:37.567 [2024-11-20 10:00:14.458864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.458894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:37.567 [2024-11-20 10:00:14.469372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f8e88 00:26:37.567 [2024-11-20 10:00:14.470454] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.567 [2024-11-20 10:00:14.470484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:37.826 [2024-11-20 10:00:14.482157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f0788 00:26:37.826 [2024-11-20 10:00:14.483516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.826 [2024-11-20 10:00:14.483560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:37.826 [2024-11-20 10:00:14.493009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f7da8 00:26:37.826 [2024-11-20 10:00:14.494105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.826 [2024-11-20 10:00:14.494134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:37.826 [2024-11-20 10:00:14.504519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f6458 00:26:37.826 [2024-11-20 10:00:14.505589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.826 [2024-11-20 10:00:14.505632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:37.826 [2024-11-20 10:00:14.518638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f4298 00:26:37.826 [2024-11-20 10:00:14.520367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.826 [2024-11-20 10:00:14.520398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:37.826 [2024-11-20 10:00:14.526974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ebb98 00:26:37.826 [2024-11-20 10:00:14.527878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.826 [2024-11-20 10:00:14.527921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:37.826 [2024-11-20 10:00:14.541242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f1ca0 00:26:37.826 [2024-11-20 10:00:14.542718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.826 [2024-11-20 10:00:14.542763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:37.826 [2024-11-20 10:00:14.552184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e5a90 00:26:37.826 [2024-11-20 10:00:14.553293] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.826 [2024-11-20 10:00:14.553329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:37.826 21408.00 IOPS, 83.62 MiB/s [2024-11-20T09:00:14.740Z] [2024-11-20 10:00:14.564813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f7538 00:26:37.826 [2024-11-20 10:00:14.566229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.826 [2024-11-20 10:00:14.566274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:37.826 [2024-11-20 10:00:14.576650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fc998 00:26:37.826 [2024-11-20 10:00:14.577762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.826 [2024-11-20 10:00:14.577813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:37.827 [2024-11-20 10:00:14.588515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f6890 00:26:37.827 [2024-11-20 10:00:14.589831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.827 [2024-11-20 10:00:14.589862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:37.827 [2024-11-20 10:00:14.601036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f5378 00:26:37.827 [2024-11-20 10:00:14.602476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.827 [2024-11-20 10:00:14.602509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:37.827 [2024-11-20 10:00:14.611054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f6cc8 00:26:37.827 [2024-11-20 10:00:14.611842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.827 [2024-11-20 10:00:14.611886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:37.827 [2024-11-20 10:00:14.624677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e49b0 00:26:37.827 [2024-11-20 10:00:14.626187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.827 [2024-11-20 10:00:14.626218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:37.827 [2024-11-20 10:00:14.634329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) 
with pdu=0x2000166ef6a8 00:26:37.827 [2024-11-20 10:00:14.635149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.827 [2024-11-20 10:00:14.635180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:37.827 [2024-11-20 10:00:14.646266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ef6a8 00:26:37.827 [2024-11-20 10:00:14.647180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.827 [2024-11-20 10:00:14.647210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:37.827 [2024-11-20 10:00:14.659796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ef6a8 00:26:37.827 [2024-11-20 10:00:14.661228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.827 [2024-11-20 10:00:14.661275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:37.827 [2024-11-20 10:00:14.672187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fcdd0 00:26:37.827 [2024-11-20 10:00:14.673772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.827 [2024-11-20 10:00:14.673815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:37.827 [2024-11-20 10:00:14.684587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e6300 00:26:37.827 [2024-11-20 10:00:14.686260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.827 [2024-11-20 10:00:14.686292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:37.827 [2024-11-20 10:00:14.696925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e01f8 00:26:37.827 [2024-11-20 10:00:14.698891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.827 [2024-11-20 10:00:14.698934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:37.827 [2024-11-20 10:00:14.705369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fb048 00:26:37.827 [2024-11-20 10:00:14.706194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.827 [2024-11-20 10:00:14.706238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.827 [2024-11-20 10:00:14.720525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1928220) with pdu=0x2000166e4de8 00:26:37.827 [2024-11-20 10:00:14.722416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.827 [2024-11-20 10:00:14.722461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.827 [2024-11-20 10:00:14.728865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ef6a8 00:26:37.827 [2024-11-20 10:00:14.729781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.827 [2024-11-20 10:00:14.729824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.742581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f6cc8 00:26:38.120 [2024-11-20 10:00:14.744076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.744122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.753413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f20d8 00:26:38.120 [2024-11-20 10:00:14.754750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.754780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.765239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e7818 00:26:38.120 [2024-11-20 10:00:14.766388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.766440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.779555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ee190 00:26:38.120 [2024-11-20 10:00:14.781344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.781388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.791875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fc560 00:26:38.120 [2024-11-20 10:00:14.793837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.793882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.800255] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ef6a8 00:26:38.120 [2024-11-20 10:00:14.801279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.801329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.812327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ebb98 00:26:38.120 [2024-11-20 10:00:14.813314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.813358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.823763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166eff18 00:26:38.120 [2024-11-20 10:00:14.824765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.824808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.835856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ea680 00:26:38.120 [2024-11-20 10:00:14.836927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.836970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.847269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ebb98 00:26:38.120 [2024-11-20 10:00:14.848180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.848210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.861260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e38d0 00:26:38.120 [2024-11-20 10:00:14.862590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.862621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.873193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e4578 00:26:38.120 [2024-11-20 10:00:14.874615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.874644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.884320] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f4298 00:26:38.120 [2024-11-20 10:00:14.885347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.885398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.895156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166df550 00:26:38.120 [2024-11-20 10:00:14.896178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.896222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.909344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fd208 00:26:38.120 [2024-11-20 10:00:14.910983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.911026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.918821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166eea00 00:26:38.120 [2024-11-20 10:00:14.919861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.919889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.933129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fc560 00:26:38.120 [2024-11-20 10:00:14.934651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.934697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.944336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fcdd0 00:26:38.120 [2024-11-20 10:00:14.945588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.945633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.955339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e4578 00:26:38.120 [2024-11-20 10:00:14.956492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.956536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:38.120 
[2024-11-20 10:00:14.966489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f8618 00:26:38.120 [2024-11-20 10:00:14.967483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.967527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.977570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fda78 00:26:38.120 [2024-11-20 10:00:14.978413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.978443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:14.989815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ee190 00:26:38.120 [2024-11-20 10:00:14.991020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:14.991048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:15.002067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166eb328 00:26:38.120 [2024-11-20 10:00:15.003345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:15.003388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:38.120 [2024-11-20 10:00:15.013793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e5220 00:26:38.120 [2024-11-20 10:00:15.014831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.120 [2024-11-20 10:00:15.014860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:38.121 [2024-11-20 10:00:15.025152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166eee38 00:26:38.121 [2024-11-20 10:00:15.026387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.121 [2024-11-20 10:00:15.026418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:38.378 [2024-11-20 10:00:15.037126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e6b70 00:26:38.378 [2024-11-20 10:00:15.038346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.378 [2024-11-20 10:00:15.038375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:001c p:0 m:0 
dnr:0 00:26:38.378 [2024-11-20 10:00:15.051410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e88f8 00:26:38.378 [2024-11-20 10:00:15.053196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.378 [2024-11-20 10:00:15.053225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:38.378 [2024-11-20 10:00:15.059774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fc128 00:26:38.378 [2024-11-20 10:00:15.060823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.378 [2024-11-20 10:00:15.060864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:38.378 [2024-11-20 10:00:15.071966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e2c28 00:26:38.378 [2024-11-20 10:00:15.072914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.378 [2024-11-20 10:00:15.072943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:38.378 [2024-11-20 10:00:15.084370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e8088 00:26:38.378 [2024-11-20 10:00:15.085381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.378 [2024-11-20 10:00:15.085411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:38.378 [2024-11-20 10:00:15.095272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f8e88 00:26:38.378 [2024-11-20 10:00:15.096284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.378 [2024-11-20 10:00:15.096335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:38.378 [2024-11-20 10:00:15.109618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e0a68 00:26:38.378 [2024-11-20 10:00:15.111318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.378 [2024-11-20 10:00:15.111348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:38.378 [2024-11-20 10:00:15.120590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f1ca0 00:26:38.378 [2024-11-20 10:00:15.122297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.378 [2024-11-20 10:00:15.122335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 
sqhd:0064 p:0 m:0 dnr:0 00:26:38.378 [2024-11-20 10:00:15.130668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e4578 00:26:38.378 [2024-11-20 10:00:15.131459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.378 [2024-11-20 10:00:15.131489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:38.378 [2024-11-20 10:00:15.142640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ff3c8 00:26:38.378 [2024-11-20 10:00:15.143437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.378 [2024-11-20 10:00:15.143482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:38.378 [2024-11-20 10:00:15.155906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f1ca0 00:26:38.378 [2024-11-20 10:00:15.157463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.378 [2024-11-20 10:00:15.157493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:38.378 [2024-11-20 10:00:15.167601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ea248 00:26:38.378 [2024-11-20 10:00:15.168779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.378 [2024-11-20 10:00:15.168822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:38.378 [2024-11-20 10:00:15.179234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e9e10 00:26:38.378 [2024-11-20 10:00:15.180501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.378 [2024-11-20 10:00:15.180532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:38.379 [2024-11-20 10:00:15.191719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e2c28 00:26:38.379 [2024-11-20 10:00:15.193115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.379 [2024-11-20 10:00:15.193148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:38.379 [2024-11-20 10:00:15.203689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fe2e8 00:26:38.379 [2024-11-20 10:00:15.205052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.379 [2024-11-20 10:00:15.205079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:38.379 [2024-11-20 10:00:15.215156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e4de8 00:26:38.379 [2024-11-20 10:00:15.216231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.379 [2024-11-20 10:00:15.216260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:38.379 [2024-11-20 10:00:15.228750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166eb328 00:26:38.379 [2024-11-20 10:00:15.230553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.379 [2024-11-20 10:00:15.230584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:38.379 [2024-11-20 10:00:15.237016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f6890 00:26:38.379 [2024-11-20 10:00:15.237849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.379 [2024-11-20 10:00:15.237893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:38.379 [2024-11-20 10:00:15.248743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fbcf0 00:26:38.379 [2024-11-20 10:00:15.249706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.379 [2024-11-20 10:00:15.249751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:38.379 [2024-11-20 10:00:15.262928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fa7d8 00:26:38.379 [2024-11-20 10:00:15.264497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.379 [2024-11-20 10:00:15.264543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:38.379 [2024-11-20 10:00:15.274026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e12d8 00:26:38.379 [2024-11-20 10:00:15.275363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.379 [2024-11-20 10:00:15.275393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:38.379 [2024-11-20 10:00:15.284696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f9f68 00:26:38.379 [2024-11-20 10:00:15.285790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.379 [2024-11-20 10:00:15.285832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:38.636 [2024-11-20 10:00:15.295893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f8618 00:26:38.636 [2024-11-20 10:00:15.296844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.636 [2024-11-20 10:00:15.296887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:38.636 [2024-11-20 10:00:15.310236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f4f40 00:26:38.636 [2024-11-20 10:00:15.311934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.636 [2024-11-20 10:00:15.311978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:38.636 [2024-11-20 10:00:15.321185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e9168 00:26:38.636 [2024-11-20 10:00:15.322563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.636 [2024-11-20 10:00:15.322592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:38.636 [2024-11-20 10:00:15.332946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e99d8 00:26:38.636 [2024-11-20 10:00:15.334319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.636 [2024-11-20 10:00:15.334347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:38.636 [2024-11-20 10:00:15.345390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e6738 00:26:38.636 [2024-11-20 10:00:15.346909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.636 [2024-11-20 10:00:15.346953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:38.636 [2024-11-20 10:00:15.355958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e84c0 00:26:38.636 [2024-11-20 10:00:15.357605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.636 [2024-11-20 10:00:15.357634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:38.636 [2024-11-20 10:00:15.368838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f2948 00:26:38.636 [2024-11-20 10:00:15.370356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.636 [2024-11-20 10:00:15.370386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:38.636 [2024-11-20 10:00:15.381128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e3498 00:26:38.636 [2024-11-20 10:00:15.382934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.636 [2024-11-20 10:00:15.382976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:38.636 [2024-11-20 10:00:15.393277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f35f0 00:26:38.636 [2024-11-20 10:00:15.394960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.636 [2024-11-20 10:00:15.394988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:38.636 [2024-11-20 10:00:15.401242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f31b8 00:26:38.636 [2024-11-20 10:00:15.402128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.636 [2024-11-20 10:00:15.402154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:38.636 [2024-11-20 10:00:15.413459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ed920 00:26:38.636 [2024-11-20 10:00:15.414397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.636 [2024-11-20 10:00:15.414427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:38.636 [2024-11-20 10:00:15.425501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f6020 00:26:38.637 [2024-11-20 10:00:15.426442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.637 [2024-11-20 10:00:15.426472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:38.637 [2024-11-20 10:00:15.439469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f4f40 00:26:38.637 [2024-11-20 10:00:15.441111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.637 [2024-11-20 10:00:15.441155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:38.637 [2024-11-20 10:00:15.449985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e5a90 00:26:38.637 [2024-11-20 10:00:15.451715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.637 [2024-11-20 
10:00:15.451744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:38.637 [2024-11-20 10:00:15.461906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fb480 00:26:38.637 [2024-11-20 10:00:15.463402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.637 [2024-11-20 10:00:15.463431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:38.637 [2024-11-20 10:00:15.474538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f81e0 00:26:38.637 [2024-11-20 10:00:15.475967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.637 [2024-11-20 10:00:15.476014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:38.637 [2024-11-20 10:00:15.485973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f1430 00:26:38.637 [2024-11-20 10:00:15.487241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.637 [2024-11-20 10:00:15.487287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:38.637 [2024-11-20 10:00:15.495934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f7538 00:26:38.637 [2024-11-20 10:00:15.496714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.637 [2024-11-20 10:00:15.496762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:38.637 [2024-11-20 10:00:15.508043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f35f0 00:26:38.637 [2024-11-20 10:00:15.508942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.637 [2024-11-20 10:00:15.508969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:38.637 [2024-11-20 10:00:15.520504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166fb480 00:26:38.637 [2024-11-20 10:00:15.521725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.637 [2024-11-20 10:00:15.521769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:38.637 [2024-11-20 10:00:15.534646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166ddc00 00:26:38.637 [2024-11-20 10:00:15.536406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:38.637 [2024-11-20 10:00:15.536436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:38.637 [2024-11-20 10:00:15.543051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166e6738 00:26:38.637 [2024-11-20 10:00:15.543818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.637 [2024-11-20 10:00:15.543845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:38.893 [2024-11-20 10:00:15.554366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928220) with pdu=0x2000166f7538 00:26:38.893 [2024-11-20 10:00:15.555166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:38.893 [2024-11-20 10:00:15.555208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:38.893 21536.50 IOPS, 84.13 MiB/s 00:26:38.893 Latency(us) 00:26:38.893 [2024-11-20T09:00:15.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.893 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:38.893 nvme0n1 : 2.01 21546.92 84.17 0.00 0.00 5931.42 2451.53 16019.91 00:26:38.893 [2024-11-20T09:00:15.808Z] =================================================================================================================== 00:26:38.894 [2024-11-20T09:00:15.808Z] Total : 21546.92 84.17 0.00 0.00 5931.42 2451.53 16019.91 00:26:38.894 { 00:26:38.894 "results": [ 00:26:38.894 { 00:26:38.894 "job": "nvme0n1", 00:26:38.894 "core_mask": "0x2", 00:26:38.894 "workload": "randwrite", 00:26:38.894 "status": "finished", 00:26:38.894 "queue_depth": 128, 00:26:38.894 "io_size": 4096, 00:26:38.894 "runtime": 2.007944, 00:26:38.894 "iops": 21546.91565103409, 00:26:38.894 "mibps": 84.16763926185192, 00:26:38.894 "io_failed": 0, 00:26:38.894 "io_timeout": 0, 00:26:38.894 "avg_latency_us": 5931.418807949288, 00:26:38.894 "min_latency_us": 2451.531851851852, 00:26:38.894 "max_latency_us": 16019.91111111111 00:26:38.894 } 00:26:38.894 ], 00:26:38.894 "core_count": 1 00:26:38.894 } 00:26:38.894 10:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:38.894 10:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:38.894 10:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:38.894 | .driver_specific 00:26:38.894 | .nvme_error 00:26:38.894 | .status_code 00:26:38.894 | .command_transient_transport_error' 00:26:38.894 10:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:39.151 10:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 169 > 0 )) 00:26:39.151 10:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3843524 00:26:39.151 10:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3843524 ']' 00:26:39.151 
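The (( 169 > 0 )) check traced above is host/digest.sh confirming that the corrupted data digests really surfaced as COMMAND TRANSIENT TRANSPORT ERROR completions on the initiator. The count comes from bdev_get_iostat over the bperf RPC socket; the per-NVMe error counters are available because the controller was created after bdev_nvme_set_options --nvme-error-stat. A minimal stand-alone sketch of that extraction, assuming a checkout-relative scripts/rpc.py and the /var/tmp/bperf.sock socket seen in this trace:

    # Sketch only: count the transient-transport-error completions bdevperf has seen.
    get_transient_errcount() {
        local bdev=$1
        ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))   # the digest error test only passes if corrupted digests were observed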
10:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3843524 00:26:39.151 10:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:39.151 10:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:39.151 10:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3843524 00:26:39.151 10:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:39.151 10:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:39.151 10:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3843524' 00:26:39.151 killing process with pid 3843524 00:26:39.151 10:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3843524 00:26:39.151 Received shutdown signal, test time was about 2.000000 seconds 00:26:39.151 00:26:39.151 Latency(us) 00:26:39.151 [2024-11-20T09:00:16.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.151 [2024-11-20T09:00:16.065Z] =================================================================================================================== 00:26:39.151 [2024-11-20T09:00:16.065Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:39.151 10:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3843524 00:26:39.408 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:39.408 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:39.408 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:39.408 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:39.408 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:39.408 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3843935 00:26:39.408 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:39.408 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3843935 /var/tmp/bperf.sock 00:26:39.408 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3843935 ']' 00:26:39.408 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:39.408 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.408 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:39.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
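run_bperf_err randwrite 131072 16, whose trace starts above, repeats the error test with 128 KiB random writes at queue depth 16: it launches a second bdevperf instance in idle mode and waits for its RPC socket before configuring it. A sketch of that launch step with the same arguments, assuming checkout-relative paths instead of the absolute workspace paths in the trace:

    # Sketch only: bdevperf on core mask 0x2, 128 KiB random writes, queue depth 16,
    # 2-second runtime; -z keeps it idle until perform_tests arrives over the RPC socket.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # waitforlisten equivalent: poll until the UNIX-domain RPC socket responds.
    until ./scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done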
00:26:39.408 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.408 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.408 [2024-11-20 10:00:16.146279] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:26:39.408 [2024-11-20 10:00:16.146392] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3843935 ] 00:26:39.408 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:39.408 Zero copy mechanism will not be used. 00:26:39.408 [2024-11-20 10:00:16.212366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.408 [2024-11-20 10:00:16.270955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.665 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:39.665 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:39.665 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:39.665 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:39.923 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:39.923 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.923 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.923 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.923 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:39.923 10:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.180 nvme0n1 00:26:40.180 10:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:40.180 10:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.180 10:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:40.180 10:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.180 10:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:40.180 10:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
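With bdevperf listening, the trace above wires up the error path: NVMe error statistics and unlimited bdev retries are enabled on the bperf side, the controller is attached with data digest enabled (--ddgst), and accel_error_inject_error is issued through rpc_cmd, that is, against the long-running target application rather than against bdevperf, first with -t disable to clear any earlier injection and then with -t corrupt so that CRC32C results are corrupted. perform_tests then starts the 2-second run whose output follows. A sketch of the same sequence, assuming the target uses rpc.py's default socket and checkout-relative script paths:

    # Sketch only: BPERF is the bdevperf RPC socket from the trace; rpc.py without
    # -s talks to the target application's default socket.
    BPERF=/var/tmp/bperf.sock

    ./scripts/rpc.py -s $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable       # start with injection off
    ./scripts/rpc.py -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32 # corrupt data digests
    ./examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests         # the 2-second run below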
00:26:40.439 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:40.439 Zero copy mechanism will not be used. 00:26:40.439 Running I/O for 2 seconds... 00:26:40.439 [2024-11-20 10:00:17.182801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.182902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.182944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.188852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.188945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.188978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.194111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.194198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.194232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.199727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.199805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.199835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.205214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.205299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.205337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.210431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.210501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.210528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.215502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.215588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.215617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.220558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.220641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.220672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.225570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.225651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.225680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.230629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.230710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.230738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.236437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.236510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.236538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.241418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.241500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.241530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.246466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.246548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.246577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.251386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.251464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.251493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.256452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.256542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.256571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.261409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.261484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.261512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.266485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.266568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.266598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.271577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.271665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.271694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.276673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.276756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.276786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.281601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.439 [2024-11-20 10:00:17.281689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.439 [2024-11-20 10:00:17.281722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.439 [2024-11-20 10:00:17.286581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.440 [2024-11-20 10:00:17.286651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.440 [2024-11-20 10:00:17.286679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.440 [2024-11-20 10:00:17.291864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.440 [2024-11-20 10:00:17.291934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.440 [2024-11-20 10:00:17.291961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.440 [2024-11-20 10:00:17.297412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.440 [2024-11-20 10:00:17.297491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.440 [2024-11-20 10:00:17.297521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.440 [2024-11-20 10:00:17.303179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.440 [2024-11-20 10:00:17.303267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.440 [2024-11-20 10:00:17.303296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.440 [2024-11-20 10:00:17.308281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.440 [2024-11-20 10:00:17.308386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.440 [2024-11-20 10:00:17.308415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.440 [2024-11-20 10:00:17.313633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.440 [2024-11-20 10:00:17.313728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.440 [2024-11-20 10:00:17.313756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.440 [2024-11-20 10:00:17.318911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.440 [2024-11-20 10:00:17.318999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.440 [2024-11-20 10:00:17.319028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.440 [2024-11-20 10:00:17.323898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.440 [2024-11-20 10:00:17.323976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.440 [2024-11-20 10:00:17.324005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.440 [2024-11-20 10:00:17.328743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.440 [2024-11-20 10:00:17.328837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.440 [2024-11-20 10:00:17.328866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.440 [2024-11-20 10:00:17.333612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.440 [2024-11-20 10:00:17.333700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.440 [2024-11-20 10:00:17.333728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.440 [2024-11-20 10:00:17.338676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.440 [2024-11-20 10:00:17.338747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.440 [2024-11-20 10:00:17.338775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.440 [2024-11-20 10:00:17.343573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.440 [2024-11-20 10:00:17.343655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.440 [2024-11-20 10:00:17.343683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.440 [2024-11-20 10:00:17.348726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.440 [2024-11-20 10:00:17.348805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.440 [2024-11-20 10:00:17.348833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.698 [2024-11-20 10:00:17.354402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.698 [2024-11-20 10:00:17.354492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.698 [2024-11-20 10:00:17.354525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.698 [2024-11-20 10:00:17.359962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.698 [2024-11-20 
10:00:17.360075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.698 [2024-11-20 10:00:17.360104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.698 [2024-11-20 10:00:17.367025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.698 [2024-11-20 10:00:17.367181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.698 [2024-11-20 10:00:17.367212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.698 [2024-11-20 10:00:17.373506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.698 [2024-11-20 10:00:17.373614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.373643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.380522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.380724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.380753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.387979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.388066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.388095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.394171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.394257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.394284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.400181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.400297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.400335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.405452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 
00:26:40.699 [2024-11-20 10:00:17.405575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.405604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.410819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.410943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.410972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.416114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.416214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.416243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.421927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.422095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.422124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.428280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.428411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.428446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.434526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.434702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.434731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.440790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.440947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.440976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.447128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.447280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.447318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.454052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.454165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.454194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.460899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.461006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.461036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.466132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.466204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.466232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.471109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.471199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.471228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.475988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.476063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.476090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.481194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.481275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.481309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.486842] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.486916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.486943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.491909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.491989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.492017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.496860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.496932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.496960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.501955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.502041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.502069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.507146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.507230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.507259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.512088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.512176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.512205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.516947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.517036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.517063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.521919] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.699 [2024-11-20 10:00:17.521995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.699 [2024-11-20 10:00:17.522025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.699 [2024-11-20 10:00:17.526980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.700 [2024-11-20 10:00:17.527088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.700 [2024-11-20 10:00:17.527119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.700 [2024-11-20 10:00:17.532356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.700 [2024-11-20 10:00:17.532477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.700 [2024-11-20 10:00:17.532506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.700 [2024-11-20 10:00:17.538720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.700 [2024-11-20 10:00:17.538898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.700 [2024-11-20 10:00:17.538927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.700 [2024-11-20 10:00:17.544386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.700 [2024-11-20 10:00:17.544574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.700 [2024-11-20 10:00:17.544603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.700 [2024-11-20 10:00:17.551708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.700 [2024-11-20 10:00:17.551853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.700 [2024-11-20 10:00:17.551883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.700 [2024-11-20 10:00:17.557789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.700 [2024-11-20 10:00:17.558095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.700 [2024-11-20 10:00:17.558125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.700 
[2024-11-20 10:00:17.563913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.700 [2024-11-20 10:00:17.564244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.700 [2024-11-20 10:00:17.564274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.700 [2024-11-20 10:00:17.569980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.700 [2024-11-20 10:00:17.570337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.700 [2024-11-20 10:00:17.570367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.700 [2024-11-20 10:00:17.575832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.700 [2024-11-20 10:00:17.576154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.700 [2024-11-20 10:00:17.576190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.700 [2024-11-20 10:00:17.581744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.700 [2024-11-20 10:00:17.582104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.700 [2024-11-20 10:00:17.582134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.700 [2024-11-20 10:00:17.587755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.700 [2024-11-20 10:00:17.588044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.700 [2024-11-20 10:00:17.588073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.700 [2024-11-20 10:00:17.594373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.700 [2024-11-20 10:00:17.594677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.700 [2024-11-20 10:00:17.594706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.700 [2024-11-20 10:00:17.601209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.700 [2024-11-20 10:00:17.601571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.700 [2024-11-20 10:00:17.601601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:26:40.700 [2024-11-20 10:00:17.607790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.700 [2024-11-20 10:00:17.608059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.700 [2024-11-20 10:00:17.608088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.614651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.614936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.614965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.621640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.621962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.621992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.628511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.628820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.628850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.635338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.635641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.635670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.641789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.642075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.642104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.647379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.647665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.647694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.652804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.653085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.653115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.657852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.658130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.658160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.664503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.664912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.664942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.670328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.670606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.670635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.674964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.675245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.675274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.679533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.679805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.679834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.684739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.685060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.685090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.690679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.691014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.691044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.695505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.695788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.695816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.700093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.700412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.700441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.704477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.704747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.704775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.708784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.709005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.709034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.713089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.713351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.713380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.717553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.717762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.717790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.721875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.722121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.959 [2024-11-20 10:00:17.722154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.959 [2024-11-20 10:00:17.726213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.959 [2024-11-20 10:00:17.726471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.726500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.730535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.730807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.730835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.734945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.735170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.735198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.739323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.739618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.739646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.744878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.745236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.745265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.749918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.750141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 
10:00:17.750169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.756087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.756436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.756465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.762389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.762629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.762658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.767533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.767763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.767792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.772177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.772405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.772434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.776852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.777040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.777068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.781639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.781819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.781849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.786243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.786433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:40.960 [2024-11-20 10:00:17.786462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.790798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.790967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.790996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.795258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.795441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.795470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.799969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.800151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.800180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.804497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.804692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.804720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.809254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.809477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.809505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.814403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.814588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.814616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.818819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.818991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.819020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.822995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.823176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.823204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.827134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.827336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.827364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.831337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.831497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.831526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.835624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.835797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.835825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.840182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.840381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.840409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.844653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.844819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.844852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.960 [2024-11-20 10:00:17.849226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.960 [2024-11-20 10:00:17.849425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.960 [2024-11-20 10:00:17.849453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.961 [2024-11-20 10:00:17.853808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.961 [2024-11-20 10:00:17.853996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.961 [2024-11-20 10:00:17.854025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.961 [2024-11-20 10:00:17.858415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.961 [2024-11-20 10:00:17.858602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.961 [2024-11-20 10:00:17.858630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.961 [2024-11-20 10:00:17.863368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.961 [2024-11-20 10:00:17.863598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.961 [2024-11-20 10:00:17.863628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.961 [2024-11-20 10:00:17.869337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:40.961 [2024-11-20 10:00:17.869600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.961 [2024-11-20 10:00:17.869629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.220 [2024-11-20 10:00:17.874568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.220 [2024-11-20 10:00:17.874826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.220 [2024-11-20 10:00:17.874856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.220 [2024-11-20 10:00:17.879784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.220 [2024-11-20 10:00:17.879999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.220 [2024-11-20 10:00:17.880028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.220 [2024-11-20 10:00:17.884503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.220 [2024-11-20 10:00:17.884677] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.220 [2024-11-20 10:00:17.884706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.220 [2024-11-20 10:00:17.889319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.220 [2024-11-20 10:00:17.889550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.220 [2024-11-20 10:00:17.889579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.220 [2024-11-20 10:00:17.894866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.220 [2024-11-20 10:00:17.895067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.220 [2024-11-20 10:00:17.895096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.220 [2024-11-20 10:00:17.900599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.220 [2024-11-20 10:00:17.900872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.220 [2024-11-20 10:00:17.900901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.220 [2024-11-20 10:00:17.906824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.220 [2024-11-20 10:00:17.907030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.220 [2024-11-20 10:00:17.907059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.220 [2024-11-20 10:00:17.911979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.220 [2024-11-20 10:00:17.912156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.220 [2024-11-20 10:00:17.912184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.220 [2024-11-20 10:00:17.916779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.220 [2024-11-20 10:00:17.916969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:17.916997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:17.921212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:17.921412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:17.921442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:17.925745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:17.925933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:17.925960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:17.930370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:17.930565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:17.930593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:17.934647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:17.934829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:17.934856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:17.939361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:17.939599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:17.939628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:17.944584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:17.944800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:17.944829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:17.949279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:17.949550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:17.949579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:17.955266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 
10:00:17.955567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:17.955598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:17.960090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:17.960318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:17.960348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:17.964377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:17.964585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:17.964617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:17.968826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:17.969033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:17.969062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:17.973063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:17.973269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:17.973323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:17.977236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:17.977452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:17.977481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:17.982253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:17.982560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:17.982590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:17.987427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with 
pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:17.987666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:17.987695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:17.992618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:17.992945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:17.992975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:17.998295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:17.998541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:17.998570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:18.002966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:18.003178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:18.003206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:18.007300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:18.007548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:18.007577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:18.011880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:18.012143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:18.012172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:18.016420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:18.016639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:18.016668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:18.020751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:18.020957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:18.020987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.221 [2024-11-20 10:00:18.024929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.221 [2024-11-20 10:00:18.025138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.221 [2024-11-20 10:00:18.025167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.029473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.029683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.029712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.034052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.034259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.034288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.038536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.038743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.038772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.043340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.043549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.043578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.047867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.048073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.048102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.052619] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.052830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.052859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.057339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.057547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.057576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.061872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.062082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.062111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.066401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.066608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.066637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.071022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.071242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.071272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.076015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.076281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.076319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.080842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.081051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.081079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.222 
[2024-11-20 10:00:18.085035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.085240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.085269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.089231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.089448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.089477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.093435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.093644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.093678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.097607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.097814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.097843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.101789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.101995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.102024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.105935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.106139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.106168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.110074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.110280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.110317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.114206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.114419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.114447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.118382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.118589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.118618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.122520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.122726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.122755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.126726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.126946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.126974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.222 [2024-11-20 10:00:18.130911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.222 [2024-11-20 10:00:18.131124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.222 [2024-11-20 10:00:18.131152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.481 [2024-11-20 10:00:18.135035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.481 [2024-11-20 10:00:18.135239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.481 [2024-11-20 10:00:18.135268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.481 [2024-11-20 10:00:18.139217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.481 [2024-11-20 10:00:18.139429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.481 [2024-11-20 10:00:18.139458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.481 [2024-11-20 10:00:18.143810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.481 [2024-11-20 10:00:18.144127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.481 [2024-11-20 10:00:18.144156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.481 [2024-11-20 10:00:18.149133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.481 [2024-11-20 10:00:18.149457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.481 [2024-11-20 10:00:18.149486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.481 [2024-11-20 10:00:18.154507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.154791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.154822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.160495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.160703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.160733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.166193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.166480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.166510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.171848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.172074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.172104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.482 5974.00 IOPS, 746.75 MiB/s [2024-11-20T09:00:18.396Z] [2024-11-20 10:00:18.179095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.179353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.179383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.183924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.184135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.184163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.188133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.188346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.188377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.192362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.192566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.192595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.196526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.196732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.196761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.201116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.201332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.201362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.205324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.205532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.205561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.210090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.210422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 
10:00:18.210452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.215209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.215496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.215530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.220299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.220577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.220606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.225394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.225656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.225685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.230713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.230987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.231016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.236231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.236460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.236490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.242175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.242405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.242435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.247172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.247385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:41.482 [2024-11-20 10:00:18.247414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.252352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.252561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.252590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.256970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.257176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.257205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.261491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.261706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.261735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.266172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.266389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.266418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.270839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.271046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.271075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.275246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.275458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.275487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.279947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.280155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.482 [2024-11-20 10:00:18.280185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.482 [2024-11-20 10:00:18.284711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.482 [2024-11-20 10:00:18.284917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.284946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.289412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.289617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.289646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.293862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.294068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.294097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.298259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.298474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.298503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.302554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.302759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.302788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.307104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.307321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.307350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.311718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.311926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.311955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.316197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.316414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.316443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.320773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.320979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.321009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.325279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.325493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.325522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.329763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.329970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.329999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.334335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.334542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.334571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.338920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.339124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.339157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.343353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.343556] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.343586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.347759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.347951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.347980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.352280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.352483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.352512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.356817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.357013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.357041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.361451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.361644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.361673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.366088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.366283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.366320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.370768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.370959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.370991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.375228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.375428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.375457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.379748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.379947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.379975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.384152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.384351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.384380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.483 [2024-11-20 10:00:18.388606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.483 [2024-11-20 10:00:18.388811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.483 [2024-11-20 10:00:18.388840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.742 [2024-11-20 10:00:18.393285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.742 [2024-11-20 10:00:18.393487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.742 [2024-11-20 10:00:18.393516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.742 [2024-11-20 10:00:18.397819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.742 [2024-11-20 10:00:18.398012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.742 [2024-11-20 10:00:18.398040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.742 [2024-11-20 10:00:18.402485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.742 [2024-11-20 10:00:18.402678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.742 [2024-11-20 10:00:18.402707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.742 [2024-11-20 10:00:18.407168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.742 [2024-11-20 
10:00:18.407368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.742 [2024-11-20 10:00:18.407397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.742 [2024-11-20 10:00:18.411778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.742 [2024-11-20 10:00:18.411972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.742 [2024-11-20 10:00:18.412000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.416368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.416562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.416590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.421152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.421355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.421383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.425870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.426060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.426093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.430573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.430767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.430796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.435714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.435941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.435969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.440446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 
00:26:41.743 [2024-11-20 10:00:18.440645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.440674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.445066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.445258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.445287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.450131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.450331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.450365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.454406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.454599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.454628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.458577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.458773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.458813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.462770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.462964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.462993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.466933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.467123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.467152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.471090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.471282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.471319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.475319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.475514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.475543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.479521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.479710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.479738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.483746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.483939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.483968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.487896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.488088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.488116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.492046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.492239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.492266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.496174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.496385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.496414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.500343] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.500536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.500564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.504502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.504694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.504722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.508638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.508828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.508856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.512824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.513016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.513044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.516988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.517182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.517211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.521160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.521361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.521390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.525285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.525485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.525514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.529421] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.743 [2024-11-20 10:00:18.529612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.743 [2024-11-20 10:00:18.529641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.743 [2024-11-20 10:00:18.533558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.533749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.533778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.537722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.537915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.537943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.541853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.542044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.542073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.545992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.546182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.546210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.550181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.550377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.550405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.554315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.554507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.554536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.744 
[2024-11-20 10:00:18.558475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.558668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.558696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.562689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.562881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.562910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.566836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.567029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.567064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.570976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.571166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.571195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.575110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.575309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.575338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.579270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.579471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.579501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.583878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.584129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.584158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.588918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.589195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.589224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.594543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.594841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.594871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.600101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.600332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.600362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.605293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.605522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.605551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.610488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.610739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.610768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.615587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.615808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.615837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.620754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.621023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.621052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.625738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.625972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.626002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.630899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.631184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.631213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.636031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.636259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.636289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.641223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.641479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.641509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.646313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.646547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.646576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.744 [2024-11-20 10:00:18.651497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:41.744 [2024-11-20 10:00:18.651756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.744 [2024-11-20 10:00:18.651785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.003 [2024-11-20 10:00:18.656620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.003 [2024-11-20 10:00:18.656835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.003 [2024-11-20 10:00:18.656865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.003 [2024-11-20 10:00:18.661741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.003 [2024-11-20 10:00:18.662045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.003 [2024-11-20 10:00:18.662074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.003 [2024-11-20 10:00:18.666817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.003 [2024-11-20 10:00:18.667134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.003 [2024-11-20 10:00:18.667162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.003 [2024-11-20 10:00:18.671932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.003 [2024-11-20 10:00:18.672130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.003 [2024-11-20 10:00:18.672159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.003 [2024-11-20 10:00:18.676188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.003 [2024-11-20 10:00:18.676386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.003 [2024-11-20 10:00:18.676415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.003 [2024-11-20 10:00:18.680423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.003 [2024-11-20 10:00:18.680609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.003 [2024-11-20 10:00:18.680638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.003 [2024-11-20 10:00:18.684672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.003 [2024-11-20 10:00:18.684849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.003 [2024-11-20 10:00:18.684878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.003 [2024-11-20 10:00:18.689043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.003 [2024-11-20 10:00:18.689229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.003 [2024-11-20 10:00:18.689258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.003 [2024-11-20 10:00:18.694491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.003 [2024-11-20 10:00:18.694700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.003 [2024-11-20 10:00:18.694735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.003 [2024-11-20 10:00:18.698904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.699105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.699134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.703673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.703895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.703924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.708832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.709098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.709127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.713917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.714168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.714196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.720129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.720446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.720476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.725391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.725637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 
10:00:18.725667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.730561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.730848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.730878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.735690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.735913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.735942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.740821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.741130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.741160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.746021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.746339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.746369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.751159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.751423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.751452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.756362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.756629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.756659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.761419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.761697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:42.004 [2024-11-20 10:00:18.761726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.766535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.766796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.766824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.771742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.771996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.772025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.776817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.777072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.777100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.781993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.782275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.782315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.787222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.787489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.787519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.792337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.792574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.792603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.797437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.797648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.797682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.802535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.802851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.802880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.807619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.807940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.807969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.812625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.812803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.812831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.817706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.817877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.817906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.822803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.822954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.822983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.827855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.828029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.828063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.004 [2024-11-20 10:00:18.832973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.004 [2024-11-20 10:00:18.833118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.004 [2024-11-20 10:00:18.833147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.005 [2024-11-20 10:00:18.838044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.005 [2024-11-20 10:00:18.838198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.005 [2024-11-20 10:00:18.838227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.005 [2024-11-20 10:00:18.843090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.005 [2024-11-20 10:00:18.843271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.005 [2024-11-20 10:00:18.843299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.005 [2024-11-20 10:00:18.848220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.005 [2024-11-20 10:00:18.848359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.005 [2024-11-20 10:00:18.848388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.005 [2024-11-20 10:00:18.853362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.005 [2024-11-20 10:00:18.853567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.005 [2024-11-20 10:00:18.853595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.005 [2024-11-20 10:00:18.858472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.005 [2024-11-20 10:00:18.858634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.005 [2024-11-20 10:00:18.858664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.005 [2024-11-20 10:00:18.863595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.005 [2024-11-20 10:00:18.863738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.005 [2024-11-20 10:00:18.863767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.005 [2024-11-20 10:00:18.868788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.005 [2024-11-20 10:00:18.868930] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.005 [2024-11-20 10:00:18.868958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.005 [2024-11-20 10:00:18.873920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.005 [2024-11-20 10:00:18.874130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.005 [2024-11-20 10:00:18.874159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.005 [2024-11-20 10:00:18.878945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.005 [2024-11-20 10:00:18.879096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.005 [2024-11-20 10:00:18.879125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.005 [2024-11-20 10:00:18.883993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.005 [2024-11-20 10:00:18.884161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.005 [2024-11-20 10:00:18.884190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.005 [2024-11-20 10:00:18.889088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.005 [2024-11-20 10:00:18.889256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.005 [2024-11-20 10:00:18.889286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.005 [2024-11-20 10:00:18.894242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.005 [2024-11-20 10:00:18.894338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.005 [2024-11-20 10:00:18.894367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.005 [2024-11-20 10:00:18.899292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.005 [2024-11-20 10:00:18.899414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.005 [2024-11-20 10:00:18.899442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.005 [2024-11-20 10:00:18.904422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.005 [2024-11-20 10:00:18.904612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.005 [2024-11-20 10:00:18.904642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.005 [2024-11-20 10:00:18.909512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.005 [2024-11-20 10:00:18.909691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.005 [2024-11-20 10:00:18.909719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.005 [2024-11-20 10:00:18.914646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.264 [2024-11-20 10:00:18.914782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.264 [2024-11-20 10:00:18.914811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.264 [2024-11-20 10:00:18.919769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.264 [2024-11-20 10:00:18.919869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.264 [2024-11-20 10:00:18.919897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.264 [2024-11-20 10:00:18.924804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.264 [2024-11-20 10:00:18.924946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.264 [2024-11-20 10:00:18.924976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:18.929963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:18.930146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:18.930176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:18.935100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:18.935232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:18.935261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:18.940293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 
10:00:18.940441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:18.940470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:18.945368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:18.945523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:18.945552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:18.950436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:18.950604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:18.950633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:18.955539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:18.955683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:18.955712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:18.960612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:18.960764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:18.960799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:18.965641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:18.965844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:18.965873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:18.970902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:18.971051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:18.971080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:18.975990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with 
pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:18.976128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:18.976157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:18.981061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:18.981213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:18.981242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:18.986132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:18.986295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:18.986334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:18.991315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:18.991501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:18.991529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:18.996407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:18.996576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:18.996605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:19.001587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:19.001781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:19.001810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:19.006677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:19.006854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:19.006883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:19.011719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:19.011829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:19.011857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:19.016868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:19.017031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:19.017060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:19.022028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:19.022222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:19.022251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:19.027072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:19.027187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:19.027215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:19.032456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:19.032636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:19.032664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:19.037572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:19.037721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:19.037749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:19.042614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:19.042805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:19.042833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:19.047754] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:19.047890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:19.047918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:19.052909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:19.053084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:19.053113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:19.057914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:19.058050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.265 [2024-11-20 10:00:19.058078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.265 [2024-11-20 10:00:19.062905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.265 [2024-11-20 10:00:19.063009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.063037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.068050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.068210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.068238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.073083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.073288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.073326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.078100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.078245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.078274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.083236] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.083450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.083479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.088342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.088536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.088565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.093460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.093596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.093634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.098610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.098803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.098833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.103735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.103904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.103932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.108929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.109112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.109141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.114020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.114182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.114210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.266 
[2024-11-20 10:00:19.119196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.119403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.119432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.124203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.124353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.124382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.129232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.129432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.129461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.134476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.134636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.134665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.139547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.139714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.139742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.144603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.144802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.144830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.149659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.149878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.149909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.154659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.154841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.154870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.159889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.160048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.160077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.164935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.165112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.165141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.170050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.170203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.170232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.266 [2024-11-20 10:00:19.175068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.266 [2024-11-20 10:00:19.175281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.266 [2024-11-20 10:00:19.175317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.524 6194.00 IOPS, 774.25 MiB/s [2024-11-20T09:00:19.438Z] [2024-11-20 10:00:19.181502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1928560) with pdu=0x2000166ff3c8 00:26:42.524 [2024-11-20 10:00:19.181684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.524 [2024-11-20 10:00:19.181713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.524 00:26:42.524 Latency(us) 00:26:42.524 [2024-11-20T09:00:19.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.524 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:42.524 nvme0n1 : 2.00 6190.87 773.86 0.00 0.00 2577.52 1905.40 12524.66 00:26:42.524 [2024-11-20T09:00:19.438Z] 
=================================================================================================================== 00:26:42.524 [2024-11-20T09:00:19.438Z] Total : 6190.87 773.86 0.00 0.00 2577.52 1905.40 12524.66 00:26:42.524 { 00:26:42.524 "results": [ 00:26:42.524 { 00:26:42.524 "job": "nvme0n1", 00:26:42.524 "core_mask": "0x2", 00:26:42.524 "workload": "randwrite", 00:26:42.524 "status": "finished", 00:26:42.524 "queue_depth": 16, 00:26:42.524 "io_size": 131072, 00:26:42.524 "runtime": 2.004404, 00:26:42.524 "iops": 6190.867709304112, 00:26:42.524 "mibps": 773.858463663014, 00:26:42.524 "io_failed": 0, 00:26:42.524 "io_timeout": 0, 00:26:42.524 "avg_latency_us": 2577.5179329220427, 00:26:42.524 "min_latency_us": 1905.3985185185186, 00:26:42.524 "max_latency_us": 12524.657777777778 00:26:42.524 } 00:26:42.524 ], 00:26:42.524 "core_count": 1 00:26:42.524 } 00:26:42.524 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:42.524 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:42.524 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:42.524 | .driver_specific 00:26:42.524 | .nvme_error 00:26:42.524 | .status_code 00:26:42.524 | .command_transient_transport_error' 00:26:42.524 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:42.782 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 401 > 0 )) 00:26:42.782 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3843935 00:26:42.782 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3843935 ']' 00:26:42.782 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3843935 00:26:42.782 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:42.782 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:42.782 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3843935 00:26:42.782 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:42.782 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:42.782 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3843935' 00:26:42.782 killing process with pid 3843935 00:26:42.782 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3843935 00:26:42.782 Received shutdown signal, test time was about 2.000000 seconds 00:26:42.782 00:26:42.782 Latency(us) 00:26:42.782 [2024-11-20T09:00:19.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.782 [2024-11-20T09:00:19.696Z] =================================================================================================================== 00:26:42.782 [2024-11-20T09:00:19.696Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:42.782 10:00:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3843935 00:26:43.041 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3842060 00:26:43.041 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3842060 ']' 00:26:43.041 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3842060 00:26:43.041 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:43.041 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:43.041 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3842060 00:26:43.041 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:43.041 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:43.041 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3842060' 00:26:43.041 killing process with pid 3842060 00:26:43.041 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3842060 00:26:43.041 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3842060 00:26:43.041 00:26:43.041 real 0m15.192s 00:26:43.041 user 0m30.516s 00:26:43.041 sys 0m4.196s 00:26:43.041 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:43.041 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:43.041 ************************************ 00:26:43.041 END TEST nvmf_digest_error 00:26:43.041 ************************************ 00:26:43.301 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:43.301 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:43.301 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:43.301 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:43.301 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:43.301 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:43.301 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:43.301 10:00:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:43.301 rmmod nvme_tcp 00:26:43.301 rmmod nvme_fabrics 00:26:43.301 rmmod nvme_keyring 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3842060 ']' 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3842060 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3842060 ']' 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@958 -- # kill -0 3842060 00:26:43.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3842060) - No such process 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3842060 is not found' 00:26:43.301 Process with pid 3842060 is not found 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:43.301 10:00:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.206 10:00:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:45.206 00:26:45.206 real 0m35.994s 00:26:45.206 user 1m3.682s 00:26:45.206 sys 0m10.208s 00:26:45.206 10:00:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:45.206 10:00:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:45.206 ************************************ 00:26:45.206 END TEST nvmf_digest 00:26:45.206 ************************************ 00:26:45.206 10:00:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:45.206 10:00:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:45.206 10:00:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:45.206 10:00:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:45.206 10:00:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:45.206 10:00:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:45.206 10:00:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.466 ************************************ 00:26:45.466 START TEST nvmf_bdevperf 00:26:45.466 ************************************ 00:26:45.466 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:45.466 * Looking for test storage... 
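The digest-error pass that just finished is judged by how many of the deliberately corrupted writes completed with a transient transport error, not by throughput alone. A minimal sketch of that check, assuming the SPDK repo root as the working directory and the bperf RPC socket and bdev name reported in the trace (/var/tmp/bperf.sock, nvme0n1):

# get_transient_errcount in host/digest.sh boils down to: pull iostat for the
# bdev over bdevperf's RPC socket and extract the transient-transport-error
# counter that the injected data-digest corruption is expected to push above 0.
errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 )) && echo "digest corruption surfaced as $errcount transient transport errors"

In this run the counter came back as 401 while the job still sustained 6190.87 IOPS of 128 KiB writes, i.e. 6190.87 x 131072 / 2^20 = 773.86 MiB/s, matching the MiB/s column in the summary above.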
00:26:45.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:45.466 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:45.466 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:45.466 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:45.466 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:45.466 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:45.466 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:45.466 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:45.466 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:45.466 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:45.466 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:45.466 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:45.466 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:45.466 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:45.466 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:45.466 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:45.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.467 --rc genhtml_branch_coverage=1 00:26:45.467 --rc genhtml_function_coverage=1 00:26:45.467 --rc genhtml_legend=1 00:26:45.467 --rc geninfo_all_blocks=1 00:26:45.467 --rc geninfo_unexecuted_blocks=1 00:26:45.467 00:26:45.467 ' 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:45.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.467 --rc genhtml_branch_coverage=1 00:26:45.467 --rc genhtml_function_coverage=1 00:26:45.467 --rc genhtml_legend=1 00:26:45.467 --rc geninfo_all_blocks=1 00:26:45.467 --rc geninfo_unexecuted_blocks=1 00:26:45.467 00:26:45.467 ' 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:45.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.467 --rc genhtml_branch_coverage=1 00:26:45.467 --rc genhtml_function_coverage=1 00:26:45.467 --rc genhtml_legend=1 00:26:45.467 --rc geninfo_all_blocks=1 00:26:45.467 --rc geninfo_unexecuted_blocks=1 00:26:45.467 00:26:45.467 ' 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:45.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.467 --rc genhtml_branch_coverage=1 00:26:45.467 --rc genhtml_function_coverage=1 00:26:45.467 --rc genhtml_legend=1 00:26:45.467 --rc geninfo_all_blocks=1 00:26:45.467 --rc geninfo_unexecuted_blocks=1 00:26:45.467 00:26:45.467 ' 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:45.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:45.467 10:00:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:48.012 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:48.012 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
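The device discovery being traced through nvmf/common.sh resolves each allow-listed PCI function to its kernel net device by globbing sysfs, which is what produces the "Found net devices under ..." lines for both ports. A minimal sketch of that step, using the first E810 port reported in this run (0000:09:00.0):

# Same glob and strip as gather_supported_nvmf_pci_devs: list the net devices
# registered under the PCI function, then keep only the interface names.
pci=0000:09:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)       # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")                # strip the sysfs path -> cvl_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"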
00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:48.012 Found net devices under 0000:09:00.0: cvl_0_0 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.012 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:48.013 Found net devices under 0000:09:00.1: cvl_0_1 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:48.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:26:48.013 00:26:48.013 --- 10.0.0.2 ping statistics --- 00:26:48.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.013 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:48.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:48.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:26:48.013 00:26:48.013 --- 10.0.0.1 ping statistics --- 00:26:48.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.013 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3846404 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3846404 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3846404 ']' 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:48.013 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.013 [2024-11-20 10:00:24.707192] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
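Before the target starts, nvmf_tcp_init splits the two ports across a network namespace so initiator and target traffic cross a real TCP path, then verifies reachability in both directions. A condensed sketch of the sequence traced above, using the interface, namespace and address values reported in this run:

# Target-side port moves into its own namespace; initiator-side port stays in
# the root namespace. Names and addresses are the ones printed in the trace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
ping -c 1 10.0.0.2                                   # initiator -> target (0.347 ms here)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator (0.163 ms here)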
00:26:48.013 [2024-11-20 10:00:24.707285] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.013 [2024-11-20 10:00:24.781504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:48.013 [2024-11-20 10:00:24.841972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.013 [2024-11-20 10:00:24.842024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.013 [2024-11-20 10:00:24.842052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.013 [2024-11-20 10:00:24.842063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.013 [2024-11-20 10:00:24.842073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:48.013 [2024-11-20 10:00:24.843807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:48.013 [2024-11-20 10:00:24.843852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:48.013 [2024-11-20 10:00:24.843856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.271 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:48.271 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:48.271 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:48.271 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:48.271 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.271 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.271 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:48.271 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.271 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.271 [2024-11-20 10:00:24.994001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.271 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.271 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:48.271 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.271 10:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.271 Malloc0 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
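The core masks explain the reactor notices above: the target is started with -m 0xE, so its three reactors land on cores 1-3, leaving core 0 for the bdevperf initiator launched later with -c 0x1. A quick sketch of how such a mask reads:

# 0xE = 0b1110 -> reactors on cores 1, 2 and 3 (matching the three
# "Reactor started on core ..." notices above); core 0 stays free.
mask=0xE
for core in 0 1 2 3; do
    (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
done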
00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.271 [2024-11-20 10:00:25.060876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:48.271 { 00:26:48.271 "params": { 00:26:48.271 "name": "Nvme$subsystem", 00:26:48.271 "trtype": "$TEST_TRANSPORT", 00:26:48.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:48.271 "adrfam": "ipv4", 00:26:48.271 "trsvcid": "$NVMF_PORT", 00:26:48.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:48.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:48.271 "hdgst": ${hdgst:-false}, 00:26:48.271 "ddgst": ${ddgst:-false} 00:26:48.271 }, 00:26:48.271 "method": "bdev_nvme_attach_controller" 00:26:48.271 } 00:26:48.271 EOF 00:26:48.271 )") 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:48.271 10:00:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:48.271 "params": { 00:26:48.271 "name": "Nvme1", 00:26:48.271 "trtype": "tcp", 00:26:48.271 "traddr": "10.0.0.2", 00:26:48.271 "adrfam": "ipv4", 00:26:48.271 "trsvcid": "4420", 00:26:48.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:48.271 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:48.271 "hdgst": false, 00:26:48.271 "ddgst": false 00:26:48.271 }, 00:26:48.271 "method": "bdev_nvme_attach_controller" 00:26:48.271 }' 00:26:48.271 [2024-11-20 10:00:25.113323] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
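tgt_init in host/bdevperf.sh drives the target bring-up seen in the last few steps entirely over RPC before the initiator attaches. A minimal sketch using the same commands and flags as this run (rpc.py talks to the nvmf_tgt started earlier on its default /var/tmp/spdk.sock socket; the rpc.py path is given relative to the SPDK repo root):

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, 8192-byte IO unit
$rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # expose Malloc0 through the subsystem
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420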
00:26:48.271 [2024-11-20 10:00:25.113394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846439 ] 00:26:48.528 [2024-11-20 10:00:25.184133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.528 [2024-11-20 10:00:25.249252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.784 Running I/O for 1 seconds... 00:26:49.719 8584.00 IOPS, 33.53 MiB/s 00:26:49.719 Latency(us) 00:26:49.719 [2024-11-20T09:00:26.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.719 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:49.719 Verification LBA range: start 0x0 length 0x4000 00:26:49.719 Nvme1n1 : 1.01 8618.47 33.67 0.00 0.00 14788.39 3325.35 15049.01 00:26:49.719 [2024-11-20T09:00:26.633Z] =================================================================================================================== 00:26:49.719 [2024-11-20T09:00:26.633Z] Total : 8618.47 33.67 0.00 0.00 14788.39 3325.35 15049.01 00:26:49.977 10:00:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3846600 00:26:49.977 10:00:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:49.977 10:00:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:49.977 10:00:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:49.977 10:00:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:49.977 10:00:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:49.977 10:00:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:49.977 10:00:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:49.977 { 00:26:49.977 "params": { 00:26:49.977 "name": "Nvme$subsystem", 00:26:49.977 "trtype": "$TEST_TRANSPORT", 00:26:49.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.977 "adrfam": "ipv4", 00:26:49.977 "trsvcid": "$NVMF_PORT", 00:26:49.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.977 "hdgst": ${hdgst:-false}, 00:26:49.977 "ddgst": ${ddgst:-false} 00:26:49.977 }, 00:26:49.977 "method": "bdev_nvme_attach_controller" 00:26:49.977 } 00:26:49.977 EOF 00:26:49.977 )") 00:26:49.977 10:00:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:49.977 10:00:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
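On the initiator side, bdevperf is simply pointed at a generated JSON config whose only entry attaches an NVMe-oF controller over TCP. In the sketch below the params are copied from the config printed in the trace; the enclosing "subsystems"/"bdev" wrapper is an assumption about what gen_nvmf_target_json's final jq step emits (it is not shown verbatim in the log), and /tmp/bperf.json is just an illustrative file name:

cat > /tmp/bperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same flags as the 15-second run being started above: 128 outstanding 4096-byte
# verify IOs (the test later kills the target while this run is still in flight).
build/examples/bdevperf --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 15 -f

As a sanity check on the earlier numbers, the 1-second run reported 8618.47 IOPS of 4096-byte IOs, i.e. 8618.47 x 4096 / 2^20 = 33.67 MiB/s, which matches the MiB/s column printed above.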
00:26:49.977 10:00:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:26:49.977 10:00:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:26:49.977 "params": {
00:26:49.977 "name": "Nvme1",
00:26:49.977 "trtype": "tcp",
00:26:49.977 "traddr": "10.0.0.2",
00:26:49.977 "adrfam": "ipv4",
00:26:49.977 "trsvcid": "4420",
00:26:49.977 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:49.977 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:26:49.977 "hdgst": false,
00:26:49.977 "ddgst": false
00:26:49.977 },
00:26:49.978 "method": "bdev_nvme_attach_controller"
00:26:49.978 }'
[2024-11-20 10:00:26.776481] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization...
[2024-11-20 10:00:26.776567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846600 ]
[2024-11-20 10:00:26.848426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:50.235 [2024-11-20 10:00:26.908670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:50.493 Running I/O for 15 seconds...
00:26:52.362 8615.00 IOPS, 33.65 MiB/s [2024-11-20T09:00:29.846Z] 8667.00 IOPS, 33.86 MiB/s [2024-11-20T09:00:29.846Z]
10:00:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3846404
00:26:52.932 10:00:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:26:52.932 [2024-11-20 10:00:29.740676] nvme_qpair.c: [condensed for readability: from 10:00:29.740676 through 10:00:29.744632 the I/O outstanding against nqn.2016-06.io.spdk:cnode1 is dumped as it is aborted. nvme_io_qpair_print_command reports READ commands covering lba 43096 through 43472 and WRITE commands covering lba 43488 through 44112 on sqid:1 (len:8 each, cid varying), and spdk_nvme_print_completion reports every one of them as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0.]
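For reference, the bdev_nvme_attach_controller parameters printed at the start of this excerpt are the JSON the bdevperf run above was driven with. A minimal sketch of reproducing that setup by hand is below; the output path, the surrounding "subsystems"/"bdev" wrapper, and the bdevperf I/O flags (-q/-o/-w) are illustrative assumptions rather than values taken from this log, with only -t 15 matching the "Running I/O for 15 seconds" line above.

# Sketch only: wrap the printed attach-controller params in a bdev subsystem
# config and hand it to bdevperf; the path and I/O flags here are illustrative.
cat > /tmp/nvmf_bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/nvmf_bdevperf.json -q 128 -o 4096 -w verify -t 15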
00:26:52.935 [2024-11-20 10:00:29.744646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fbba0 is same with the state(6) to be set
00:26:52.935 [2024-11-20 10:00:29.744661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:52.935 [2024-11-20 10:00:29.744686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:52.935 [2024-11-20 10:00:29.744697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43480 len:8 PRP1 0x0 PRP2 0x0
00:26:52.935 [2024-11-20 10:00:29.744711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.935 [2024-11-20 10:00:29.747842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.935 [2024-11-20 10:00:29.747918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor
00:26:52.935 [2024-11-20 10:00:29.748818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.935 [2024-11-20 10:00:29.748848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420
00:26:52.935 [2024-11-20 10:00:29.748865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set
00:26:52.935 [2024-11-20 10:00:29.749110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor
00:26:52.935 [2024-11-20 10:00:29.749355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.935 [2024-11-20 10:00:29.749378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.935 [2024-11-20 10:00:29.749395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.935 [2024-11-20 10:00:29.749410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.935 [2024-11-20 10:00:29.761259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.935 [2024-11-20 10:00:29.761660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.935 [2024-11-20 10:00:29.761703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420
00:26:52.935 [2024-11-20 10:00:29.761719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set
00:26:52.935 [2024-11-20 10:00:29.761988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor
00:26:52.935 [2024-11-20 10:00:29.762183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.935 [2024-11-20 10:00:29.762202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.935 [2024-11-20 10:00:29.762214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.935 [2024-11-20 10:00:29.762225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
[condensed for readability: the same reconnect cycle repeats sixteen more times while the target remains killed, with attempts starting at 10:00:29.774, .787, .800, .813, .826, .840, .853, .866, .879, .893, .906, .919, .932, .945, .958 and .971. Each attempt logs nvme_ctrlr_disconnect "resetting controller", posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420", "The recv state of tqpair=0x18e8a40 is same with the state(6) to be set", "Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor", "Ctrlr is in error state", "controller reinitialization failed" and "in failed state.", and ends with bdev_nvme_reset_ctrlr_complete "Resetting controller failed.", the elapsed-time prefix advancing from 00:26:52.935 to 00:26:53.196.]
00:26:53.196 [2024-11-20 10:00:29.985218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.196 [2024-11-20 10:00:29.985643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.196 [2024-11-20 10:00:29.985670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.196 [2024-11-20 10:00:29.985685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.196 [2024-11-20 10:00:29.985900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.196 [2024-11-20 10:00:29.986111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.196 [2024-11-20 10:00:29.986130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.196 [2024-11-20 10:00:29.986142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.196 [2024-11-20 10:00:29.986153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.196 [2024-11-20 10:00:29.998380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.196 [2024-11-20 10:00:29.998790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.197 [2024-11-20 10:00:29.998823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.197 [2024-11-20 10:00:29.998840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.197 [2024-11-20 10:00:29.999069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.197 [2024-11-20 10:00:29.999314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.197 [2024-11-20 10:00:29.999334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.197 [2024-11-20 10:00:29.999362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.197 [2024-11-20 10:00:29.999375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.197 [2024-11-20 10:00:30.012516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.197 [2024-11-20 10:00:30.012871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.197 [2024-11-20 10:00:30.012903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.197 [2024-11-20 10:00:30.012920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.197 [2024-11-20 10:00:30.013137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.197 [2024-11-20 10:00:30.013371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.197 [2024-11-20 10:00:30.013394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.197 [2024-11-20 10:00:30.013408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.197 [2024-11-20 10:00:30.013421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.197 [2024-11-20 10:00:30.026190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.197 [2024-11-20 10:00:30.026583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.197 [2024-11-20 10:00:30.026613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.197 [2024-11-20 10:00:30.026630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.197 [2024-11-20 10:00:30.026861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.197 [2024-11-20 10:00:30.027089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.197 [2024-11-20 10:00:30.027110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.197 [2024-11-20 10:00:30.027124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.197 [2024-11-20 10:00:30.027136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.197 [2024-11-20 10:00:30.040209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.197 [2024-11-20 10:00:30.040630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.197 [2024-11-20 10:00:30.040662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.197 [2024-11-20 10:00:30.040679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.197 [2024-11-20 10:00:30.040932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.197 [2024-11-20 10:00:30.041134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.197 [2024-11-20 10:00:30.041154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.197 [2024-11-20 10:00:30.041168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.197 [2024-11-20 10:00:30.041195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.197 [2024-11-20 10:00:30.053498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.197 [2024-11-20 10:00:30.053882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.197 [2024-11-20 10:00:30.053910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.197 [2024-11-20 10:00:30.053926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.197 [2024-11-20 10:00:30.054161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.197 [2024-11-20 10:00:30.054402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.197 [2024-11-20 10:00:30.054423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.197 [2024-11-20 10:00:30.054436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.197 [2024-11-20 10:00:30.054448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.197 [2024-11-20 10:00:30.066915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.197 [2024-11-20 10:00:30.067253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.197 [2024-11-20 10:00:30.067282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.197 [2024-11-20 10:00:30.067332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.197 [2024-11-20 10:00:30.067549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.197 [2024-11-20 10:00:30.067790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.197 [2024-11-20 10:00:30.067810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.197 [2024-11-20 10:00:30.067823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.197 [2024-11-20 10:00:30.067835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.197 [2024-11-20 10:00:30.080450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.197 [2024-11-20 10:00:30.080890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.197 [2024-11-20 10:00:30.080938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.197 [2024-11-20 10:00:30.080954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.197 [2024-11-20 10:00:30.081195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.197 [2024-11-20 10:00:30.081459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.197 [2024-11-20 10:00:30.081481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.197 [2024-11-20 10:00:30.081500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.197 [2024-11-20 10:00:30.081513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.197 [2024-11-20 10:00:30.093962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.197 [2024-11-20 10:00:30.094352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.197 [2024-11-20 10:00:30.094382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.197 [2024-11-20 10:00:30.094399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.197 [2024-11-20 10:00:30.094629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.197 [2024-11-20 10:00:30.094839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.197 [2024-11-20 10:00:30.094859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.197 [2024-11-20 10:00:30.094871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.197 [2024-11-20 10:00:30.094882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.458 [2024-11-20 10:00:30.107813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.458 [2024-11-20 10:00:30.108326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.458 [2024-11-20 10:00:30.108371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.458 [2024-11-20 10:00:30.108387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.458 [2024-11-20 10:00:30.108622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.458 [2024-11-20 10:00:30.108832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.458 [2024-11-20 10:00:30.108851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.458 [2024-11-20 10:00:30.108863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.458 [2024-11-20 10:00:30.108874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.458 [2024-11-20 10:00:30.121173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.458 [2024-11-20 10:00:30.121571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.458 [2024-11-20 10:00:30.121600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.458 [2024-11-20 10:00:30.121617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.458 [2024-11-20 10:00:30.121847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.458 [2024-11-20 10:00:30.122064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.458 [2024-11-20 10:00:30.122083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.458 [2024-11-20 10:00:30.122095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.458 [2024-11-20 10:00:30.122106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.458 [2024-11-20 10:00:30.134499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.458 [2024-11-20 10:00:30.134815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.458 [2024-11-20 10:00:30.134904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.458 [2024-11-20 10:00:30.134920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.458 [2024-11-20 10:00:30.135156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.458 [2024-11-20 10:00:30.135384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.458 [2024-11-20 10:00:30.135406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.458 [2024-11-20 10:00:30.135419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.458 [2024-11-20 10:00:30.135431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.458 [2024-11-20 10:00:30.147846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.458 [2024-11-20 10:00:30.148171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.458 [2024-11-20 10:00:30.148198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.458 [2024-11-20 10:00:30.148213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.458 [2024-11-20 10:00:30.148459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.458 [2024-11-20 10:00:30.148689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.458 [2024-11-20 10:00:30.148709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.458 [2024-11-20 10:00:30.148721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.458 [2024-11-20 10:00:30.148732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.458 [2024-11-20 10:00:30.161156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.458 [2024-11-20 10:00:30.161537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.458 [2024-11-20 10:00:30.161566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.458 [2024-11-20 10:00:30.161582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.458 [2024-11-20 10:00:30.161811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.458 [2024-11-20 10:00:30.162021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.458 [2024-11-20 10:00:30.162040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.458 [2024-11-20 10:00:30.162052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.458 [2024-11-20 10:00:30.162063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.458 [2024-11-20 10:00:30.174466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.458 [2024-11-20 10:00:30.174892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.458 [2024-11-20 10:00:30.174962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.458 [2024-11-20 10:00:30.174978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.458 [2024-11-20 10:00:30.175228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.458 [2024-11-20 10:00:30.175468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.458 [2024-11-20 10:00:30.175489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.458 [2024-11-20 10:00:30.175501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.458 [2024-11-20 10:00:30.175512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.458 [2024-11-20 10:00:30.187858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.458 [2024-11-20 10:00:30.188222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.458 [2024-11-20 10:00:30.188250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.458 [2024-11-20 10:00:30.188266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.458 [2024-11-20 10:00:30.188531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.458 [2024-11-20 10:00:30.188797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.458 [2024-11-20 10:00:30.188817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.458 [2024-11-20 10:00:30.188830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.458 [2024-11-20 10:00:30.188842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.458 [2024-11-20 10:00:30.201330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.458 [2024-11-20 10:00:30.201702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.458 [2024-11-20 10:00:30.201766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.458 [2024-11-20 10:00:30.201807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.458 [2024-11-20 10:00:30.202061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.458 [2024-11-20 10:00:30.202323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.458 [2024-11-20 10:00:30.202343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.458 [2024-11-20 10:00:30.202356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.458 [2024-11-20 10:00:30.202367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.458 [2024-11-20 10:00:30.214685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.459 [2024-11-20 10:00:30.215057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.459 [2024-11-20 10:00:30.215085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.459 [2024-11-20 10:00:30.215101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.459 [2024-11-20 10:00:30.215363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.459 [2024-11-20 10:00:30.215570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.459 [2024-11-20 10:00:30.215610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.459 [2024-11-20 10:00:30.215622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.459 [2024-11-20 10:00:30.215634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.459 [2024-11-20 10:00:30.228007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.459 [2024-11-20 10:00:30.228403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.459 [2024-11-20 10:00:30.228447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.459 [2024-11-20 10:00:30.228463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.459 [2024-11-20 10:00:30.228716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.459 [2024-11-20 10:00:30.228924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.459 [2024-11-20 10:00:30.228944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.459 [2024-11-20 10:00:30.228956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.459 [2024-11-20 10:00:30.228968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.459 7257.00 IOPS, 28.35 MiB/s [2024-11-20T09:00:30.373Z] [2024-11-20 10:00:30.241568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.459 [2024-11-20 10:00:30.242004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.459 [2024-11-20 10:00:30.242046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.459 [2024-11-20 10:00:30.242062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.459 [2024-11-20 10:00:30.242292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.459 [2024-11-20 10:00:30.242543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.459 [2024-11-20 10:00:30.242563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.459 [2024-11-20 10:00:30.242575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.459 [2024-11-20 10:00:30.242587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.459 [2024-11-20 10:00:30.255007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.459 [2024-11-20 10:00:30.255372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.459 [2024-11-20 10:00:30.255401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.459 [2024-11-20 10:00:30.255417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.459 [2024-11-20 10:00:30.255632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.459 [2024-11-20 10:00:30.255890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.459 [2024-11-20 10:00:30.255916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.459 [2024-11-20 10:00:30.255930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.459 [2024-11-20 10:00:30.255942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.459 [2024-11-20 10:00:30.268529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.459 [2024-11-20 10:00:30.268888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.459 [2024-11-20 10:00:30.268916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.459 [2024-11-20 10:00:30.268931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.459 [2024-11-20 10:00:30.269172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.459 [2024-11-20 10:00:30.269425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.459 [2024-11-20 10:00:30.269462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.459 [2024-11-20 10:00:30.269474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.459 [2024-11-20 10:00:30.269486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.459 [2024-11-20 10:00:30.281883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.459 [2024-11-20 10:00:30.282282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.459 [2024-11-20 10:00:30.282317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.459 [2024-11-20 10:00:30.282349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.459 [2024-11-20 10:00:30.282578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.459 [2024-11-20 10:00:30.282809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.459 [2024-11-20 10:00:30.282828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.459 [2024-11-20 10:00:30.282840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.459 [2024-11-20 10:00:30.282851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.459 [2024-11-20 10:00:30.295366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.459 [2024-11-20 10:00:30.295756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.459 [2024-11-20 10:00:30.295799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.459 [2024-11-20 10:00:30.295815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.459 [2024-11-20 10:00:30.296067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.459 [2024-11-20 10:00:30.296276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.459 [2024-11-20 10:00:30.296295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.459 [2024-11-20 10:00:30.296332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.459 [2024-11-20 10:00:30.296346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.459 [2024-11-20 10:00:30.308638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.459 [2024-11-20 10:00:30.309057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.459 [2024-11-20 10:00:30.309085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.459 [2024-11-20 10:00:30.309100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.459 [2024-11-20 10:00:30.309342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.459 [2024-11-20 10:00:30.309571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.459 [2024-11-20 10:00:30.309592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.459 [2024-11-20 10:00:30.309605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.459 [2024-11-20 10:00:30.309633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.459 [2024-11-20 10:00:30.321957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.459 [2024-11-20 10:00:30.322386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.459 [2024-11-20 10:00:30.322414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.459 [2024-11-20 10:00:30.322430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.459 [2024-11-20 10:00:30.322665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.459 [2024-11-20 10:00:30.322875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.459 [2024-11-20 10:00:30.322895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.459 [2024-11-20 10:00:30.322907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.459 [2024-11-20 10:00:30.322917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.459 [2024-11-20 10:00:30.335272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.459 [2024-11-20 10:00:30.335641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.459 [2024-11-20 10:00:30.335670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.459 [2024-11-20 10:00:30.335686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.459 [2024-11-20 10:00:30.335927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.459 [2024-11-20 10:00:30.336122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.459 [2024-11-20 10:00:30.336141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.459 [2024-11-20 10:00:30.336152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.460 [2024-11-20 10:00:30.336163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.460 [2024-11-20 10:00:30.348520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.460 [2024-11-20 10:00:30.348953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.460 [2024-11-20 10:00:30.348999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.460 [2024-11-20 10:00:30.349016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.460 [2024-11-20 10:00:30.349253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.460 [2024-11-20 10:00:30.349515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.460 [2024-11-20 10:00:30.349538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.460 [2024-11-20 10:00:30.349552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.460 [2024-11-20 10:00:30.349565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.460 [2024-11-20 10:00:30.361884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.460 [2024-11-20 10:00:30.362299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.460 [2024-11-20 10:00:30.362334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.460 [2024-11-20 10:00:30.362364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.460 [2024-11-20 10:00:30.362594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.460 [2024-11-20 10:00:30.362803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.460 [2024-11-20 10:00:30.362822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.460 [2024-11-20 10:00:30.362834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.460 [2024-11-20 10:00:30.362845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.719 [2024-11-20 10:00:30.375617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.719 [2024-11-20 10:00:30.376010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.719 [2024-11-20 10:00:30.376053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.719 [2024-11-20 10:00:30.376069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.719 [2024-11-20 10:00:30.376338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.719 [2024-11-20 10:00:30.376544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.719 [2024-11-20 10:00:30.376564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.719 [2024-11-20 10:00:30.376577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.719 [2024-11-20 10:00:30.376589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.719 [2024-11-20 10:00:30.388875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.719 [2024-11-20 10:00:30.389187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.719 [2024-11-20 10:00:30.389214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.719 [2024-11-20 10:00:30.389229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.719 [2024-11-20 10:00:30.389497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.719 [2024-11-20 10:00:30.389751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.719 [2024-11-20 10:00:30.389772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.719 [2024-11-20 10:00:30.389784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.719 [2024-11-20 10:00:30.389796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.719 [2024-11-20 10:00:30.402237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.719 [2024-11-20 10:00:30.402616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.719 [2024-11-20 10:00:30.402660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.719 [2024-11-20 10:00:30.402676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.719 [2024-11-20 10:00:30.402929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.719 [2024-11-20 10:00:30.403150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.719 [2024-11-20 10:00:30.403169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.719 [2024-11-20 10:00:30.403181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.719 [2024-11-20 10:00:30.403192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.719 [2024-11-20 10:00:30.415747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.719 [2024-11-20 10:00:30.416165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.719 [2024-11-20 10:00:30.416194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.719 [2024-11-20 10:00:30.416210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.719 [2024-11-20 10:00:30.416434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.719 [2024-11-20 10:00:30.416680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.719 [2024-11-20 10:00:30.416700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.719 [2024-11-20 10:00:30.416712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.719 [2024-11-20 10:00:30.416723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.719 [2024-11-20 10:00:30.429078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.719 [2024-11-20 10:00:30.429524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.719 [2024-11-20 10:00:30.429552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.719 [2024-11-20 10:00:30.429568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.719 [2024-11-20 10:00:30.429806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.719 [2024-11-20 10:00:30.430051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.719 [2024-11-20 10:00:30.430076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.719 [2024-11-20 10:00:30.430090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.719 [2024-11-20 10:00:30.430102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.719 [2024-11-20 10:00:30.442520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.719 [2024-11-20 10:00:30.442896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.719 [2024-11-20 10:00:30.442923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.719 [2024-11-20 10:00:30.442938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.719 [2024-11-20 10:00:30.443153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.719 [2024-11-20 10:00:30.443421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.719 [2024-11-20 10:00:30.443444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.719 [2024-11-20 10:00:30.443458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.719 [2024-11-20 10:00:30.443471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.719 [2024-11-20 10:00:30.455807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.719 [2024-11-20 10:00:30.456211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.719 [2024-11-20 10:00:30.456239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.719 [2024-11-20 10:00:30.456255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.719 [2024-11-20 10:00:30.456482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.719 [2024-11-20 10:00:30.456715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.719 [2024-11-20 10:00:30.456734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.719 [2024-11-20 10:00:30.456746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.719 [2024-11-20 10:00:30.456757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.719 [2024-11-20 10:00:30.469236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.719 [2024-11-20 10:00:30.469680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.719 [2024-11-20 10:00:30.469708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.719 [2024-11-20 10:00:30.469738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.719 [2024-11-20 10:00:30.469973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.719 [2024-11-20 10:00:30.470167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.719 [2024-11-20 10:00:30.470186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.719 [2024-11-20 10:00:30.470197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.719 [2024-11-20 10:00:30.470209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.719 [2024-11-20 10:00:30.482520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.719 [2024-11-20 10:00:30.482972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.719 [2024-11-20 10:00:30.483015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.720 [2024-11-20 10:00:30.483031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.720 [2024-11-20 10:00:30.483273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.720 [2024-11-20 10:00:30.483541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.720 [2024-11-20 10:00:30.483564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.720 [2024-11-20 10:00:30.483578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.720 [2024-11-20 10:00:30.483590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.720 [2024-11-20 10:00:30.495860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.720 [2024-11-20 10:00:30.496163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.720 [2024-11-20 10:00:30.496204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.720 [2024-11-20 10:00:30.496219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.720 [2024-11-20 10:00:30.496486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.720 [2024-11-20 10:00:30.496724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.720 [2024-11-20 10:00:30.496743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.720 [2024-11-20 10:00:30.496755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.720 [2024-11-20 10:00:30.496766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.720 [2024-11-20 10:00:30.509001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.720 [2024-11-20 10:00:30.509397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.720 [2024-11-20 10:00:30.509426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.720 [2024-11-20 10:00:30.509442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.720 [2024-11-20 10:00:30.509657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.720 [2024-11-20 10:00:30.509879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.720 [2024-11-20 10:00:30.509899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.720 [2024-11-20 10:00:30.509912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.720 [2024-11-20 10:00:30.509924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.720 [2024-11-20 10:00:30.522668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.720 [2024-11-20 10:00:30.523049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.720 [2024-11-20 10:00:30.523081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.720 [2024-11-20 10:00:30.523097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.720 [2024-11-20 10:00:30.523330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.720 [2024-11-20 10:00:30.523559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.720 [2024-11-20 10:00:30.523580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.720 [2024-11-20 10:00:30.523607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.720 [2024-11-20 10:00:30.523619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.720 [2024-11-20 10:00:30.536236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.720 [2024-11-20 10:00:30.536627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.720 [2024-11-20 10:00:30.536671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.720 [2024-11-20 10:00:30.536686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.720 [2024-11-20 10:00:30.536905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.720 [2024-11-20 10:00:30.537115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.720 [2024-11-20 10:00:30.537134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.720 [2024-11-20 10:00:30.537145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.720 [2024-11-20 10:00:30.537156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.720 [2024-11-20 10:00:30.549727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.720 [2024-11-20 10:00:30.550113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.720 [2024-11-20 10:00:30.550155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.720 [2024-11-20 10:00:30.550171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.720 [2024-11-20 10:00:30.550424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.720 [2024-11-20 10:00:30.550665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.720 [2024-11-20 10:00:30.550684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.720 [2024-11-20 10:00:30.550696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.720 [2024-11-20 10:00:30.550707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.720 [2024-11-20 10:00:30.563010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.720 [2024-11-20 10:00:30.563374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.720 [2024-11-20 10:00:30.563402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.720 [2024-11-20 10:00:30.563418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.720 [2024-11-20 10:00:30.563658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.720 [2024-11-20 10:00:30.563868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.720 [2024-11-20 10:00:30.563887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.720 [2024-11-20 10:00:30.563899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.720 [2024-11-20 10:00:30.563910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.720 [2024-11-20 10:00:30.576095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.720 [2024-11-20 10:00:30.576492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.720 [2024-11-20 10:00:30.576534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.720 [2024-11-20 10:00:30.576549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.720 [2024-11-20 10:00:30.576771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.720 [2024-11-20 10:00:30.576982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.720 [2024-11-20 10:00:30.577001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.720 [2024-11-20 10:00:30.577013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.720 [2024-11-20 10:00:30.577024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.720 [2024-11-20 10:00:30.589343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.720 [2024-11-20 10:00:30.589680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.720 [2024-11-20 10:00:30.589707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.720 [2024-11-20 10:00:30.589723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.720 [2024-11-20 10:00:30.589945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.720 [2024-11-20 10:00:30.590157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.720 [2024-11-20 10:00:30.590176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.720 [2024-11-20 10:00:30.590188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.720 [2024-11-20 10:00:30.590200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.720 [2024-11-20 10:00:30.602535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.720 [2024-11-20 10:00:30.603041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.720 [2024-11-20 10:00:30.603083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.720 [2024-11-20 10:00:30.603100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.720 [2024-11-20 10:00:30.603360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.720 [2024-11-20 10:00:30.603561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.720 [2024-11-20 10:00:30.603581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.720 [2024-11-20 10:00:30.603613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.720 [2024-11-20 10:00:30.603626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.720 [2024-11-20 10:00:30.615675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.721 [2024-11-20 10:00:30.616082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.721 [2024-11-20 10:00:30.616134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.721 [2024-11-20 10:00:30.616149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.721 [2024-11-20 10:00:30.616411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.721 [2024-11-20 10:00:30.616632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.721 [2024-11-20 10:00:30.616652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.721 [2024-11-20 10:00:30.616664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.721 [2024-11-20 10:00:30.616676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.721 [2024-11-20 10:00:30.629385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.721 [2024-11-20 10:00:30.629778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.721 [2024-11-20 10:00:30.629806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.721 [2024-11-20 10:00:30.629822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.721 [2024-11-20 10:00:30.630036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.979 [2024-11-20 10:00:30.630339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.979 [2024-11-20 10:00:30.630361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.979 [2024-11-20 10:00:30.630390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.979 [2024-11-20 10:00:30.630403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.979 [2024-11-20 10:00:30.642590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.979 [2024-11-20 10:00:30.642966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.979 [2024-11-20 10:00:30.642995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.979 [2024-11-20 10:00:30.643010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.979 [2024-11-20 10:00:30.643251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.979 [2024-11-20 10:00:30.643494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.979 [2024-11-20 10:00:30.643515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.979 [2024-11-20 10:00:30.643528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.979 [2024-11-20 10:00:30.643540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.979 [2024-11-20 10:00:30.655718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.979 [2024-11-20 10:00:30.656082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.979 [2024-11-20 10:00:30.656124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.979 [2024-11-20 10:00:30.656140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.979 [2024-11-20 10:00:30.656397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.979 [2024-11-20 10:00:30.656613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.979 [2024-11-20 10:00:30.656632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.979 [2024-11-20 10:00:30.656644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.979 [2024-11-20 10:00:30.656655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.979 [2024-11-20 10:00:30.669000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.979 [2024-11-20 10:00:30.669413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.979 [2024-11-20 10:00:30.669455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.979 [2024-11-20 10:00:30.669472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.979 [2024-11-20 10:00:30.669724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.979 [2024-11-20 10:00:30.669918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.979 [2024-11-20 10:00:30.669938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.979 [2024-11-20 10:00:30.669950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.980 [2024-11-20 10:00:30.669961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.980 [2024-11-20 10:00:30.682197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.980 [2024-11-20 10:00:30.682526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.980 [2024-11-20 10:00:30.682553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.980 [2024-11-20 10:00:30.682568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.980 [2024-11-20 10:00:30.682769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.980 [2024-11-20 10:00:30.682994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.980 [2024-11-20 10:00:30.683013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.980 [2024-11-20 10:00:30.683027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.980 [2024-11-20 10:00:30.683038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.980 [2024-11-20 10:00:30.695430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.980 [2024-11-20 10:00:30.695814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.980 [2024-11-20 10:00:30.695860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.980 [2024-11-20 10:00:30.695876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.980 [2024-11-20 10:00:30.696124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.980 [2024-11-20 10:00:30.696344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.980 [2024-11-20 10:00:30.696365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.980 [2024-11-20 10:00:30.696392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.980 [2024-11-20 10:00:30.696405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.980 [2024-11-20 10:00:30.708532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.980 [2024-11-20 10:00:30.708907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.980 [2024-11-20 10:00:30.708950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.980 [2024-11-20 10:00:30.708965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.980 [2024-11-20 10:00:30.709218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.980 [2024-11-20 10:00:30.709476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.980 [2024-11-20 10:00:30.709497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.980 [2024-11-20 10:00:30.709510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.980 [2024-11-20 10:00:30.709522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.980 [2024-11-20 10:00:30.721664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.980 [2024-11-20 10:00:30.722090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.980 [2024-11-20 10:00:30.722117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.980 [2024-11-20 10:00:30.722148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.980 [2024-11-20 10:00:30.722389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.980 [2024-11-20 10:00:30.722602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.980 [2024-11-20 10:00:30.722638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.980 [2024-11-20 10:00:30.722651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.980 [2024-11-20 10:00:30.722663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.980 [2024-11-20 10:00:30.734809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.980 [2024-11-20 10:00:30.735138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.980 [2024-11-20 10:00:30.735165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.980 [2024-11-20 10:00:30.735180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.980 [2024-11-20 10:00:30.735415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.980 [2024-11-20 10:00:30.735638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.980 [2024-11-20 10:00:30.735659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.980 [2024-11-20 10:00:30.735685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.980 [2024-11-20 10:00:30.735697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.980 [2024-11-20 10:00:30.747990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.980 [2024-11-20 10:00:30.748420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.980 [2024-11-20 10:00:30.748449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.980 [2024-11-20 10:00:30.748464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.980 [2024-11-20 10:00:30.748706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.980 [2024-11-20 10:00:30.748916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.980 [2024-11-20 10:00:30.748935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.980 [2024-11-20 10:00:30.748947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.980 [2024-11-20 10:00:30.748959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.980 [2024-11-20 10:00:30.761206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.980 [2024-11-20 10:00:30.761605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.980 [2024-11-20 10:00:30.761634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.980 [2024-11-20 10:00:30.761650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.980 [2024-11-20 10:00:30.761906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.980 [2024-11-20 10:00:30.762126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.980 [2024-11-20 10:00:30.762147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.980 [2024-11-20 10:00:30.762159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.980 [2024-11-20 10:00:30.762171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.980 [2024-11-20 10:00:30.774468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.980 [2024-11-20 10:00:30.774976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.980 [2024-11-20 10:00:30.775018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.980 [2024-11-20 10:00:30.775034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.980 [2024-11-20 10:00:30.775286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.980 [2024-11-20 10:00:30.775514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.980 [2024-11-20 10:00:30.775534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.980 [2024-11-20 10:00:30.775552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.980 [2024-11-20 10:00:30.775565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.980 [2024-11-20 10:00:30.787621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.980 [2024-11-20 10:00:30.787996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.980 [2024-11-20 10:00:30.788039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.980 [2024-11-20 10:00:30.788054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.980 [2024-11-20 10:00:30.788313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.980 [2024-11-20 10:00:30.788542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.980 [2024-11-20 10:00:30.788564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.980 [2024-11-20 10:00:30.788591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.980 [2024-11-20 10:00:30.788604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.980 [2024-11-20 10:00:30.800800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.980 [2024-11-20 10:00:30.801292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.980 [2024-11-20 10:00:30.801342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.980 [2024-11-20 10:00:30.801358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.980 [2024-11-20 10:00:30.801598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.980 [2024-11-20 10:00:30.801807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.980 [2024-11-20 10:00:30.801826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.981 [2024-11-20 10:00:30.801838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.981 [2024-11-20 10:00:30.801849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.981 [2024-11-20 10:00:30.813946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.981 [2024-11-20 10:00:30.814315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.981 [2024-11-20 10:00:30.814358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.981 [2024-11-20 10:00:30.814374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.981 [2024-11-20 10:00:30.814609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.981 [2024-11-20 10:00:30.814818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.981 [2024-11-20 10:00:30.814837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.981 [2024-11-20 10:00:30.814849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.981 [2024-11-20 10:00:30.814861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.981 [2024-11-20 10:00:30.827005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.981 [2024-11-20 10:00:30.827379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.981 [2024-11-20 10:00:30.827423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.981 [2024-11-20 10:00:30.827438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.981 [2024-11-20 10:00:30.827692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.981 [2024-11-20 10:00:30.827901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.981 [2024-11-20 10:00:30.827920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.981 [2024-11-20 10:00:30.827931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.981 [2024-11-20 10:00:30.827942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.981 [2024-11-20 10:00:30.840024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.981 [2024-11-20 10:00:30.840327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.981 [2024-11-20 10:00:30.840353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.981 [2024-11-20 10:00:30.840368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.981 [2024-11-20 10:00:30.840562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.981 [2024-11-20 10:00:30.840771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.981 [2024-11-20 10:00:30.840791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.981 [2024-11-20 10:00:30.840802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.981 [2024-11-20 10:00:30.840813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.981 [2024-11-20 10:00:30.853141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.981 [2024-11-20 10:00:30.853576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.981 [2024-11-20 10:00:30.853619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.981 [2024-11-20 10:00:30.853635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.981 [2024-11-20 10:00:30.853872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.981 [2024-11-20 10:00:30.854082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.981 [2024-11-20 10:00:30.854102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.981 [2024-11-20 10:00:30.854113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.981 [2024-11-20 10:00:30.854124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.981 [2024-11-20 10:00:30.866378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.981 [2024-11-20 10:00:30.866780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.981 [2024-11-20 10:00:30.866813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.981 [2024-11-20 10:00:30.866829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.981 [2024-11-20 10:00:30.867044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.981 [2024-11-20 10:00:30.867256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.981 [2024-11-20 10:00:30.867275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.981 [2024-11-20 10:00:30.867311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.981 [2024-11-20 10:00:30.867326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:53.981 [2024-11-20 10:00:30.879454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.981 [2024-11-20 10:00:30.879816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.981 [2024-11-20 10:00:30.879843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:53.981 [2024-11-20 10:00:30.879859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:53.981 [2024-11-20 10:00:30.880094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:53.981 [2024-11-20 10:00:30.880331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.981 [2024-11-20 10:00:30.880367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.981 [2024-11-20 10:00:30.880379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.981 [2024-11-20 10:00:30.880392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.240 [2024-11-20 10:00:30.893013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.240 [2024-11-20 10:00:30.893375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.240 [2024-11-20 10:00:30.893417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.240 [2024-11-20 10:00:30.893432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.240 [2024-11-20 10:00:30.893673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.240 [2024-11-20 10:00:30.893867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.240 [2024-11-20 10:00:30.893886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.240 [2024-11-20 10:00:30.893898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.240 [2024-11-20 10:00:30.893909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.240 [2024-11-20 10:00:30.906206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.240 [2024-11-20 10:00:30.906602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.240 [2024-11-20 10:00:30.906631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.240 [2024-11-20 10:00:30.906646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.240 [2024-11-20 10:00:30.906885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.240 [2024-11-20 10:00:30.907081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.241 [2024-11-20 10:00:30.907099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.241 [2024-11-20 10:00:30.907111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.241 [2024-11-20 10:00:30.907122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.241 [2024-11-20 10:00:30.919630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.241 [2024-11-20 10:00:30.920048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.241 [2024-11-20 10:00:30.920076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.241 [2024-11-20 10:00:30.920092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.241 [2024-11-20 10:00:30.920330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.241 [2024-11-20 10:00:30.920577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.241 [2024-11-20 10:00:30.920598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.241 [2024-11-20 10:00:30.920611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.241 [2024-11-20 10:00:30.920624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.241 [2024-11-20 10:00:30.933018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.241 [2024-11-20 10:00:30.933425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.241 [2024-11-20 10:00:30.933454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.241 [2024-11-20 10:00:30.933469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.241 [2024-11-20 10:00:30.933698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.241 [2024-11-20 10:00:30.933914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.241 [2024-11-20 10:00:30.933934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.241 [2024-11-20 10:00:30.933946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.241 [2024-11-20 10:00:30.933957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.241 [2024-11-20 10:00:30.946252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.241 [2024-11-20 10:00:30.946686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.241 [2024-11-20 10:00:30.946713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.241 [2024-11-20 10:00:30.946727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.241 [2024-11-20 10:00:30.946963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.241 [2024-11-20 10:00:30.947177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.241 [2024-11-20 10:00:30.947196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.241 [2024-11-20 10:00:30.947214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.241 [2024-11-20 10:00:30.947226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.241 [2024-11-20 10:00:30.959595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.241 [2024-11-20 10:00:30.960066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.241 [2024-11-20 10:00:30.960094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.241 [2024-11-20 10:00:30.960110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.241 [2024-11-20 10:00:30.960364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.241 [2024-11-20 10:00:30.960571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.241 [2024-11-20 10:00:30.960591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.241 [2024-11-20 10:00:30.960604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.241 [2024-11-20 10:00:30.960616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.241 [2024-11-20 10:00:30.972994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.241 [2024-11-20 10:00:30.973402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.241 [2024-11-20 10:00:30.973430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.241 [2024-11-20 10:00:30.973447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.241 [2024-11-20 10:00:30.973691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.241 [2024-11-20 10:00:30.973906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.241 [2024-11-20 10:00:30.973927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.241 [2024-11-20 10:00:30.973939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.241 [2024-11-20 10:00:30.973951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.241 [2024-11-20 10:00:30.986456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.241 [2024-11-20 10:00:30.986852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.241 [2024-11-20 10:00:30.986896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.241 [2024-11-20 10:00:30.986912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.241 [2024-11-20 10:00:30.987180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.241 [2024-11-20 10:00:30.987410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.241 [2024-11-20 10:00:30.987438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.241 [2024-11-20 10:00:30.987450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.241 [2024-11-20 10:00:30.987462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.241 [2024-11-20 10:00:30.999876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.241 [2024-11-20 10:00:31.000265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.241 [2024-11-20 10:00:31.000317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.241 [2024-11-20 10:00:31.000337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.241 [2024-11-20 10:00:31.000567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.241 [2024-11-20 10:00:31.000799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.241 [2024-11-20 10:00:31.000819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.241 [2024-11-20 10:00:31.000831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.241 [2024-11-20 10:00:31.000843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.241 [2024-11-20 10:00:31.013161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.241 [2024-11-20 10:00:31.013541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.241 [2024-11-20 10:00:31.013569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.241 [2024-11-20 10:00:31.013586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.241 [2024-11-20 10:00:31.013816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.241 [2024-11-20 10:00:31.014058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.241 [2024-11-20 10:00:31.014079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.241 [2024-11-20 10:00:31.014092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.241 [2024-11-20 10:00:31.014105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.241 [2024-11-20 10:00:31.026626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.241 [2024-11-20 10:00:31.026968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.241 [2024-11-20 10:00:31.026995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.241 [2024-11-20 10:00:31.027010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.241 [2024-11-20 10:00:31.027211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.241 [2024-11-20 10:00:31.027481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.241 [2024-11-20 10:00:31.027504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.241 [2024-11-20 10:00:31.027517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.241 [2024-11-20 10:00:31.027530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.241 [2024-11-20 10:00:31.039943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.241 [2024-11-20 10:00:31.040380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.241 [2024-11-20 10:00:31.040414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.241 [2024-11-20 10:00:31.040431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.242 [2024-11-20 10:00:31.040664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.242 [2024-11-20 10:00:31.040881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.242 [2024-11-20 10:00:31.040901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.242 [2024-11-20 10:00:31.040914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.242 [2024-11-20 10:00:31.040925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.242 [2024-11-20 10:00:31.053219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.242 [2024-11-20 10:00:31.053565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.242 [2024-11-20 10:00:31.053593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.242 [2024-11-20 10:00:31.053608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.242 [2024-11-20 10:00:31.053830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.242 [2024-11-20 10:00:31.054046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.242 [2024-11-20 10:00:31.054066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.242 [2024-11-20 10:00:31.054079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.242 [2024-11-20 10:00:31.054091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.242 [2024-11-20 10:00:31.066527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.242 [2024-11-20 10:00:31.066913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.242 [2024-11-20 10:00:31.066941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.242 [2024-11-20 10:00:31.066957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.242 [2024-11-20 10:00:31.067192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.242 [2024-11-20 10:00:31.067439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.242 [2024-11-20 10:00:31.067461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.242 [2024-11-20 10:00:31.067474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.242 [2024-11-20 10:00:31.067487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.242 [2024-11-20 10:00:31.079734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.242 [2024-11-20 10:00:31.080105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.242 [2024-11-20 10:00:31.080133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.242 [2024-11-20 10:00:31.080149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.242 [2024-11-20 10:00:31.080402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.242 [2024-11-20 10:00:31.080631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.242 [2024-11-20 10:00:31.080651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.242 [2024-11-20 10:00:31.080679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.242 [2024-11-20 10:00:31.080691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.242 [2024-11-20 10:00:31.092954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.242 [2024-11-20 10:00:31.093315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.242 [2024-11-20 10:00:31.093344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.242 [2024-11-20 10:00:31.093360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.242 [2024-11-20 10:00:31.093590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.242 [2024-11-20 10:00:31.093810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.242 [2024-11-20 10:00:31.093829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.242 [2024-11-20 10:00:31.093841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.242 [2024-11-20 10:00:31.093853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.242 [2024-11-20 10:00:31.106256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.242 [2024-11-20 10:00:31.106653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.242 [2024-11-20 10:00:31.106695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.242 [2024-11-20 10:00:31.106710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.242 [2024-11-20 10:00:31.106945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.242 [2024-11-20 10:00:31.107145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.242 [2024-11-20 10:00:31.107165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.242 [2024-11-20 10:00:31.107177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.242 [2024-11-20 10:00:31.107189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.242 [2024-11-20 10:00:31.119609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.242 [2024-11-20 10:00:31.119919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.242 [2024-11-20 10:00:31.119961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.242 [2024-11-20 10:00:31.119976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.242 [2024-11-20 10:00:31.120198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.242 [2024-11-20 10:00:31.120461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.242 [2024-11-20 10:00:31.120483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.242 [2024-11-20 10:00:31.120502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.242 [2024-11-20 10:00:31.120515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.242 [2024-11-20 10:00:31.132864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.242 [2024-11-20 10:00:31.133235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.242 [2024-11-20 10:00:31.133263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.242 [2024-11-20 10:00:31.133279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.242 [2024-11-20 10:00:31.133519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.242 [2024-11-20 10:00:31.133757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.242 [2024-11-20 10:00:31.133776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.242 [2024-11-20 10:00:31.133789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.242 [2024-11-20 10:00:31.133800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.242 [2024-11-20 10:00:31.146192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.242 [2024-11-20 10:00:31.146631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.242 [2024-11-20 10:00:31.146661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.242 [2024-11-20 10:00:31.146677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.242 [2024-11-20 10:00:31.146920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.242 [2024-11-20 10:00:31.147121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.242 [2024-11-20 10:00:31.147140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.242 [2024-11-20 10:00:31.147152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.242 [2024-11-20 10:00:31.147164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.503 [2024-11-20 10:00:31.159795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.503 [2024-11-20 10:00:31.160230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.503 [2024-11-20 10:00:31.160258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.503 [2024-11-20 10:00:31.160274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.503 [2024-11-20 10:00:31.160513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.503 [2024-11-20 10:00:31.160752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.503 [2024-11-20 10:00:31.160771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.503 [2024-11-20 10:00:31.160783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.503 [2024-11-20 10:00:31.160795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.503 [2024-11-20 10:00:31.173097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.503 [2024-11-20 10:00:31.173438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.503 [2024-11-20 10:00:31.173466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.503 [2024-11-20 10:00:31.173481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.503 [2024-11-20 10:00:31.173703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.503 [2024-11-20 10:00:31.173903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.503 [2024-11-20 10:00:31.173923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.503 [2024-11-20 10:00:31.173935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.503 [2024-11-20 10:00:31.173947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.503 [2024-11-20 10:00:31.186383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.503 [2024-11-20 10:00:31.186760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.503 [2024-11-20 10:00:31.186788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.503 [2024-11-20 10:00:31.186804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.503 [2024-11-20 10:00:31.187035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.503 [2024-11-20 10:00:31.187249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.503 [2024-11-20 10:00:31.187269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.503 [2024-11-20 10:00:31.187295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.503 [2024-11-20 10:00:31.187319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.503 [2024-11-20 10:00:31.199727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.503 [2024-11-20 10:00:31.200098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.503 [2024-11-20 10:00:31.200140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.503 [2024-11-20 10:00:31.200156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.503 [2024-11-20 10:00:31.200420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.503 [2024-11-20 10:00:31.200667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.503 [2024-11-20 10:00:31.200686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.503 [2024-11-20 10:00:31.200699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.503 [2024-11-20 10:00:31.200710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.503 [2024-11-20 10:00:31.213020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.503 [2024-11-20 10:00:31.213436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.503 [2024-11-20 10:00:31.213469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.503 [2024-11-20 10:00:31.213486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.503 [2024-11-20 10:00:31.213714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.503 [2024-11-20 10:00:31.213930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.503 [2024-11-20 10:00:31.213950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.503 [2024-11-20 10:00:31.213962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.503 [2024-11-20 10:00:31.213973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.503 [2024-11-20 10:00:31.226291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.503 [2024-11-20 10:00:31.226738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.503 [2024-11-20 10:00:31.226764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.503 [2024-11-20 10:00:31.226779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.503 [2024-11-20 10:00:31.227028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.503 [2024-11-20 10:00:31.227229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.503 [2024-11-20 10:00:31.227249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.503 [2024-11-20 10:00:31.227261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.503 [2024-11-20 10:00:31.227272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.503 5442.75 IOPS, 21.26 MiB/s [2024-11-20T09:00:31.417Z] [2024-11-20 10:00:31.240233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.503 [2024-11-20 10:00:31.240635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.503 [2024-11-20 10:00:31.240679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.503 [2024-11-20 10:00:31.240695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.503 [2024-11-20 10:00:31.240950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.503 [2024-11-20 10:00:31.241164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.503 [2024-11-20 10:00:31.241184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.503 [2024-11-20 10:00:31.241197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.503 [2024-11-20 10:00:31.241208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.503 [2024-11-20 10:00:31.253673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.503 [2024-11-20 10:00:31.254115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.503 [2024-11-20 10:00:31.254157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.503 [2024-11-20 10:00:31.254172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.504 [2024-11-20 10:00:31.254441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.504 [2024-11-20 10:00:31.254663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.504 [2024-11-20 10:00:31.254683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.504 [2024-11-20 10:00:31.254695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.504 [2024-11-20 10:00:31.254706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
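The "5442.75 IOPS, 21.26 MiB/s" entry interleaved at the start of this block is a periodic throughput sample from the test's I/O generator, printed between the reset-failure messages. The two figures are mutually consistent with a 4 KiB I/O size: 5442.75 x 4096 B ≈ 22.29 MB/s, which is 21.26 MiB/s.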
00:26:54.504 [2024-11-20 10:00:31.266905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.504 [2024-11-20 10:00:31.267284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.504 [2024-11-20 10:00:31.267322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.504 [2024-11-20 10:00:31.267340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.504 [2024-11-20 10:00:31.267571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.504 [2024-11-20 10:00:31.267810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.504 [2024-11-20 10:00:31.267847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.504 [2024-11-20 10:00:31.267860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.504 [2024-11-20 10:00:31.267873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.504 [2024-11-20 10:00:31.280175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.504 [2024-11-20 10:00:31.280564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.504 [2024-11-20 10:00:31.280593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.504 [2024-11-20 10:00:31.280609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.504 [2024-11-20 10:00:31.280852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.504 [2024-11-20 10:00:31.281059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.504 [2024-11-20 10:00:31.281079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.504 [2024-11-20 10:00:31.281092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.504 [2024-11-20 10:00:31.281104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.504 [2024-11-20 10:00:31.293453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.504 [2024-11-20 10:00:31.293881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.504 [2024-11-20 10:00:31.293923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.504 [2024-11-20 10:00:31.293940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.504 [2024-11-20 10:00:31.294168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.504 [2024-11-20 10:00:31.294412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.504 [2024-11-20 10:00:31.294439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.504 [2024-11-20 10:00:31.294452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.504 [2024-11-20 10:00:31.294464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.504 [2024-11-20 10:00:31.306777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.504 [2024-11-20 10:00:31.307168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.504 [2024-11-20 10:00:31.307209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.504 [2024-11-20 10:00:31.307225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.504 [2024-11-20 10:00:31.307477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.504 [2024-11-20 10:00:31.307697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.504 [2024-11-20 10:00:31.307717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.504 [2024-11-20 10:00:31.307729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.504 [2024-11-20 10:00:31.307741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.504 [2024-11-20 10:00:31.320118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.504 [2024-11-20 10:00:31.320460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.504 [2024-11-20 10:00:31.320488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.504 [2024-11-20 10:00:31.320504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.504 [2024-11-20 10:00:31.320737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.504 [2024-11-20 10:00:31.320937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.504 [2024-11-20 10:00:31.320956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.504 [2024-11-20 10:00:31.320969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.504 [2024-11-20 10:00:31.320980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.504 [2024-11-20 10:00:31.333554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.504 [2024-11-20 10:00:31.333929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.504 [2024-11-20 10:00:31.333957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.504 [2024-11-20 10:00:31.333973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.504 [2024-11-20 10:00:31.334215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.504 [2024-11-20 10:00:31.334459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.504 [2024-11-20 10:00:31.334480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.504 [2024-11-20 10:00:31.334493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.504 [2024-11-20 10:00:31.334505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.504 [2024-11-20 10:00:31.346822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.504 [2024-11-20 10:00:31.347131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.504 [2024-11-20 10:00:31.347173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.504 [2024-11-20 10:00:31.347188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.504 [2024-11-20 10:00:31.347440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.504 [2024-11-20 10:00:31.347677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.504 [2024-11-20 10:00:31.347697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.504 [2024-11-20 10:00:31.347710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.504 [2024-11-20 10:00:31.347721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.504 [2024-11-20 10:00:31.360102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.504 [2024-11-20 10:00:31.360518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.504 [2024-11-20 10:00:31.360546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.504 [2024-11-20 10:00:31.360562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.504 [2024-11-20 10:00:31.360791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.504 [2024-11-20 10:00:31.361006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.504 [2024-11-20 10:00:31.361027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.504 [2024-11-20 10:00:31.361039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.504 [2024-11-20 10:00:31.361050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.504 [2024-11-20 10:00:31.373428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.504 [2024-11-20 10:00:31.373818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.504 [2024-11-20 10:00:31.373860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.504 [2024-11-20 10:00:31.373875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.504 [2024-11-20 10:00:31.374133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.504 [2024-11-20 10:00:31.374391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.504 [2024-11-20 10:00:31.374414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.504 [2024-11-20 10:00:31.374427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.504 [2024-11-20 10:00:31.374439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.504 [2024-11-20 10:00:31.386820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.505 [2024-11-20 10:00:31.387195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.505 [2024-11-20 10:00:31.387228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.505 [2024-11-20 10:00:31.387246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.505 [2024-11-20 10:00:31.387497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.505 [2024-11-20 10:00:31.387715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.505 [2024-11-20 10:00:31.387735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.505 [2024-11-20 10:00:31.387747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.505 [2024-11-20 10:00:31.387759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.505 [2024-11-20 10:00:31.400108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.505 [2024-11-20 10:00:31.400507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.505 [2024-11-20 10:00:31.400536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.505 [2024-11-20 10:00:31.400552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.505 [2024-11-20 10:00:31.400791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.505 [2024-11-20 10:00:31.400991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.505 [2024-11-20 10:00:31.401010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.505 [2024-11-20 10:00:31.401022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.505 [2024-11-20 10:00:31.401034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.765 [2024-11-20 10:00:31.413763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.765 [2024-11-20 10:00:31.414128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.765 [2024-11-20 10:00:31.414157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.765 [2024-11-20 10:00:31.414172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.765 [2024-11-20 10:00:31.414397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.765 [2024-11-20 10:00:31.414631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.765 [2024-11-20 10:00:31.414666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.765 [2024-11-20 10:00:31.414679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.765 [2024-11-20 10:00:31.414691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.765 [2024-11-20 10:00:31.426992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.765 [2024-11-20 10:00:31.427368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.765 [2024-11-20 10:00:31.427396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.765 [2024-11-20 10:00:31.427412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.765 [2024-11-20 10:00:31.427646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.765 [2024-11-20 10:00:31.427861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.765 [2024-11-20 10:00:31.427881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.765 [2024-11-20 10:00:31.427893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.765 [2024-11-20 10:00:31.427905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.765 [2024-11-20 10:00:31.440346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.765 [2024-11-20 10:00:31.440734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.765 [2024-11-20 10:00:31.440776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.765 [2024-11-20 10:00:31.440792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.765 [2024-11-20 10:00:31.441022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.765 [2024-11-20 10:00:31.441237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.765 [2024-11-20 10:00:31.441256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.765 [2024-11-20 10:00:31.441268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.765 [2024-11-20 10:00:31.441280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.765 [2024-11-20 10:00:31.453584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.765 [2024-11-20 10:00:31.454034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.765 [2024-11-20 10:00:31.454077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.766 [2024-11-20 10:00:31.454093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.766 [2024-11-20 10:00:31.454375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.766 [2024-11-20 10:00:31.454589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.766 [2024-11-20 10:00:31.454624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.766 [2024-11-20 10:00:31.454637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.766 [2024-11-20 10:00:31.454648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.766 [2024-11-20 10:00:31.466886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.766 [2024-11-20 10:00:31.467205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.766 [2024-11-20 10:00:31.467248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.766 [2024-11-20 10:00:31.467264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.766 [2024-11-20 10:00:31.467503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.766 [2024-11-20 10:00:31.467741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.766 [2024-11-20 10:00:31.467765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.766 [2024-11-20 10:00:31.467779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.766 [2024-11-20 10:00:31.467790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.766 [2024-11-20 10:00:31.480080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.766 [2024-11-20 10:00:31.480458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.766 [2024-11-20 10:00:31.480486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.766 [2024-11-20 10:00:31.480502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.766 [2024-11-20 10:00:31.480748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.766 [2024-11-20 10:00:31.480948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.766 [2024-11-20 10:00:31.480968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.766 [2024-11-20 10:00:31.480980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.766 [2024-11-20 10:00:31.480992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.766 [2024-11-20 10:00:31.493442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.766 [2024-11-20 10:00:31.493808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.766 [2024-11-20 10:00:31.493836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.766 [2024-11-20 10:00:31.493852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.766 [2024-11-20 10:00:31.494083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.766 [2024-11-20 10:00:31.494324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.766 [2024-11-20 10:00:31.494345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.766 [2024-11-20 10:00:31.494374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.766 [2024-11-20 10:00:31.494386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.766 [2024-11-20 10:00:31.506774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.766 [2024-11-20 10:00:31.507145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.766 [2024-11-20 10:00:31.507172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.766 [2024-11-20 10:00:31.507188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.766 [2024-11-20 10:00:31.507441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.766 [2024-11-20 10:00:31.507667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.766 [2024-11-20 10:00:31.507687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.766 [2024-11-20 10:00:31.507699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.766 [2024-11-20 10:00:31.507712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.766 [2024-11-20 10:00:31.520085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.766 [2024-11-20 10:00:31.520460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.766 [2024-11-20 10:00:31.520489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.766 [2024-11-20 10:00:31.520505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.766 [2024-11-20 10:00:31.520735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.766 [2024-11-20 10:00:31.520958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.766 [2024-11-20 10:00:31.520979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.766 [2024-11-20 10:00:31.520992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.766 [2024-11-20 10:00:31.521004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.766 [2024-11-20 10:00:31.533407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.766 [2024-11-20 10:00:31.533826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.766 [2024-11-20 10:00:31.533854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.766 [2024-11-20 10:00:31.533870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.766 [2024-11-20 10:00:31.534112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.766 [2024-11-20 10:00:31.534339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.766 [2024-11-20 10:00:31.534360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.766 [2024-11-20 10:00:31.534373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.766 [2024-11-20 10:00:31.534385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.766 [2024-11-20 10:00:31.546628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.766 [2024-11-20 10:00:31.546941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.766 [2024-11-20 10:00:31.546968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.766 [2024-11-20 10:00:31.546983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.766 [2024-11-20 10:00:31.547183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.766 [2024-11-20 10:00:31.547430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.766 [2024-11-20 10:00:31.547452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.766 [2024-11-20 10:00:31.547466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.766 [2024-11-20 10:00:31.547478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.766 [2024-11-20 10:00:31.559907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.766 [2024-11-20 10:00:31.560279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.766 [2024-11-20 10:00:31.560334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.766 [2024-11-20 10:00:31.560351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.766 [2024-11-20 10:00:31.560619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.766 [2024-11-20 10:00:31.560819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.766 [2024-11-20 10:00:31.560839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.766 [2024-11-20 10:00:31.560851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.766 [2024-11-20 10:00:31.560863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.766 [2024-11-20 10:00:31.573201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.766 [2024-11-20 10:00:31.573619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.766 [2024-11-20 10:00:31.573663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.766 [2024-11-20 10:00:31.573678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.766 [2024-11-20 10:00:31.573924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.766 [2024-11-20 10:00:31.574140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.766 [2024-11-20 10:00:31.574159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.766 [2024-11-20 10:00:31.574171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.766 [2024-11-20 10:00:31.574183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.766 [2024-11-20 10:00:31.586475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.766 [2024-11-20 10:00:31.586930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-11-20 10:00:31.586958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.767 [2024-11-20 10:00:31.586974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.767 [2024-11-20 10:00:31.587216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.767 [2024-11-20 10:00:31.587480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.767 [2024-11-20 10:00:31.587502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.767 [2024-11-20 10:00:31.587515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.767 [2024-11-20 10:00:31.587527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.767 [2024-11-20 10:00:31.599790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.767 [2024-11-20 10:00:31.600124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-11-20 10:00:31.600151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.767 [2024-11-20 10:00:31.600166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.767 [2024-11-20 10:00:31.600423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.767 [2024-11-20 10:00:31.600651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.767 [2024-11-20 10:00:31.600671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.767 [2024-11-20 10:00:31.600683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.767 [2024-11-20 10:00:31.600695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.767 [2024-11-20 10:00:31.613097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.767 [2024-11-20 10:00:31.613544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-11-20 10:00:31.613572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.767 [2024-11-20 10:00:31.613603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.767 [2024-11-20 10:00:31.613838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.767 [2024-11-20 10:00:31.614038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.767 [2024-11-20 10:00:31.614058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.767 [2024-11-20 10:00:31.614070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.767 [2024-11-20 10:00:31.614082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.767 [2024-11-20 10:00:31.626345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.767 [2024-11-20 10:00:31.626782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-11-20 10:00:31.626809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.767 [2024-11-20 10:00:31.626825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.767 [2024-11-20 10:00:31.627067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.767 [2024-11-20 10:00:31.627267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.767 [2024-11-20 10:00:31.627309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.767 [2024-11-20 10:00:31.627325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.767 [2024-11-20 10:00:31.627338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.767 [2024-11-20 10:00:31.639751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.767 [2024-11-20 10:00:31.640143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-11-20 10:00:31.640186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.767 [2024-11-20 10:00:31.640202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.767 [2024-11-20 10:00:31.640428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.767 [2024-11-20 10:00:31.640662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.767 [2024-11-20 10:00:31.640702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.767 [2024-11-20 10:00:31.640715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.767 [2024-11-20 10:00:31.640727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:54.767 [2024-11-20 10:00:31.653046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.767 [2024-11-20 10:00:31.653436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-11-20 10:00:31.653464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.767 [2024-11-20 10:00:31.653480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.767 [2024-11-20 10:00:31.653723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.767 [2024-11-20 10:00:31.653924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.767 [2024-11-20 10:00:31.653943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.767 [2024-11-20 10:00:31.653955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.767 [2024-11-20 10:00:31.653967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.767 [2024-11-20 10:00:31.666331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.767 [2024-11-20 10:00:31.666732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-11-20 10:00:31.666760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:54.767 [2024-11-20 10:00:31.666776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:54.767 [2024-11-20 10:00:31.667008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:54.767 [2024-11-20 10:00:31.667240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.767 [2024-11-20 10:00:31.667260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.767 [2024-11-20 10:00:31.667272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.767 [2024-11-20 10:00:31.667299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.027 [2024-11-20 10:00:31.679821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.027 [2024-11-20 10:00:31.680224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-11-20 10:00:31.680250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.027 [2024-11-20 10:00:31.680264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.027 [2024-11-20 10:00:31.680529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.027 [2024-11-20 10:00:31.680751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.027 [2024-11-20 10:00:31.680771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.027 [2024-11-20 10:00:31.680783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.027 [2024-11-20 10:00:31.680795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.027 [2024-11-20 10:00:31.693033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.027 [2024-11-20 10:00:31.693407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-11-20 10:00:31.693435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.027 [2024-11-20 10:00:31.693451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.027 [2024-11-20 10:00:31.693680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.027 [2024-11-20 10:00:31.693899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.027 [2024-11-20 10:00:31.693919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.027 [2024-11-20 10:00:31.693931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.027 [2024-11-20 10:00:31.693943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.027 [2024-11-20 10:00:31.706328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.027 [2024-11-20 10:00:31.706652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-11-20 10:00:31.706680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.027 [2024-11-20 10:00:31.706695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.027 [2024-11-20 10:00:31.706902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.027 [2024-11-20 10:00:31.707118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.027 [2024-11-20 10:00:31.707137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.027 [2024-11-20 10:00:31.707149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.027 [2024-11-20 10:00:31.707161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.027 [2024-11-20 10:00:31.719631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.027 [2024-11-20 10:00:31.720036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-11-20 10:00:31.720079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.027 [2024-11-20 10:00:31.720095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.027 [2024-11-20 10:00:31.720338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.027 [2024-11-20 10:00:31.720551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.027 [2024-11-20 10:00:31.720572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.027 [2024-11-20 10:00:31.720585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.027 [2024-11-20 10:00:31.720597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.027 [2024-11-20 10:00:31.732879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.027 [2024-11-20 10:00:31.733249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-11-20 10:00:31.733297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.027 [2024-11-20 10:00:31.733323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.027 [2024-11-20 10:00:31.733553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.027 [2024-11-20 10:00:31.733789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.027 [2024-11-20 10:00:31.733809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.027 [2024-11-20 10:00:31.733821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.027 [2024-11-20 10:00:31.733832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.027 [2024-11-20 10:00:31.746078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.027 [2024-11-20 10:00:31.746509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-11-20 10:00:31.746537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.027 [2024-11-20 10:00:31.746553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.027 [2024-11-20 10:00:31.746783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.027 [2024-11-20 10:00:31.746999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.027 [2024-11-20 10:00:31.747018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.027 [2024-11-20 10:00:31.747031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.027 [2024-11-20 10:00:31.747042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.027 [2024-11-20 10:00:31.759417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.027 [2024-11-20 10:00:31.759751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-11-20 10:00:31.759775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.027 [2024-11-20 10:00:31.759805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.027 [2024-11-20 10:00:31.760013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.027 [2024-11-20 10:00:31.760232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.027 [2024-11-20 10:00:31.760252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.027 [2024-11-20 10:00:31.760264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.027 [2024-11-20 10:00:31.760276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.028 [2024-11-20 10:00:31.772744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.028 [2024-11-20 10:00:31.773117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.028 [2024-11-20 10:00:31.773145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.028 [2024-11-20 10:00:31.773161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.028 [2024-11-20 10:00:31.773404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.028 [2024-11-20 10:00:31.773619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.028 [2024-11-20 10:00:31.773640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.028 [2024-11-20 10:00:31.773652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.028 [2024-11-20 10:00:31.773665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.028 [2024-11-20 10:00:31.786058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.028 [2024-11-20 10:00:31.786398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.028 [2024-11-20 10:00:31.786428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.028 [2024-11-20 10:00:31.786444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.028 [2024-11-20 10:00:31.786674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.028 [2024-11-20 10:00:31.786892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.028 [2024-11-20 10:00:31.786912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.028 [2024-11-20 10:00:31.786924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.028 [2024-11-20 10:00:31.786936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.028 [2024-11-20 10:00:31.799436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.028 [2024-11-20 10:00:31.799860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.028 [2024-11-20 10:00:31.799888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.028 [2024-11-20 10:00:31.799904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.028 [2024-11-20 10:00:31.800147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.028 [2024-11-20 10:00:31.800393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.028 [2024-11-20 10:00:31.800414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.028 [2024-11-20 10:00:31.800428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.028 [2024-11-20 10:00:31.800440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.028 [2024-11-20 10:00:31.812751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.028 [2024-11-20 10:00:31.813083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.028 [2024-11-20 10:00:31.813110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.028 [2024-11-20 10:00:31.813126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.028 [2024-11-20 10:00:31.813363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.028 [2024-11-20 10:00:31.813593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.028 [2024-11-20 10:00:31.813633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.028 [2024-11-20 10:00:31.813647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.028 [2024-11-20 10:00:31.813659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.028 [2024-11-20 10:00:31.826044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.028 [2024-11-20 10:00:31.826439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.028 [2024-11-20 10:00:31.826469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.028 [2024-11-20 10:00:31.826485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.028 [2024-11-20 10:00:31.826714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.028 [2024-11-20 10:00:31.826948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.028 [2024-11-20 10:00:31.826968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.028 [2024-11-20 10:00:31.826980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.028 [2024-11-20 10:00:31.826991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.028 [2024-11-20 10:00:31.839380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.028 [2024-11-20 10:00:31.839820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.028 [2024-11-20 10:00:31.839848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.028 [2024-11-20 10:00:31.839864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.028 [2024-11-20 10:00:31.840099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.028 [2024-11-20 10:00:31.840335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.028 [2024-11-20 10:00:31.840355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.028 [2024-11-20 10:00:31.840368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.028 [2024-11-20 10:00:31.840379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.028 [2024-11-20 10:00:31.852600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.028 [2024-11-20 10:00:31.853081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.028 [2024-11-20 10:00:31.853123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.028 [2024-11-20 10:00:31.853140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.028 [2024-11-20 10:00:31.853424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.028 [2024-11-20 10:00:31.853651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.028 [2024-11-20 10:00:31.853672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.028 [2024-11-20 10:00:31.853685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.028 [2024-11-20 10:00:31.853697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.028 [2024-11-20 10:00:31.865578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.028 [2024-11-20 10:00:31.865984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.028 [2024-11-20 10:00:31.866049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.028 [2024-11-20 10:00:31.866065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.028 [2024-11-20 10:00:31.866300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.028 [2024-11-20 10:00:31.866507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.028 [2024-11-20 10:00:31.866527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.028 [2024-11-20 10:00:31.866539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.028 [2024-11-20 10:00:31.866551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.028 [2024-11-20 10:00:31.878827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.028 [2024-11-20 10:00:31.879336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.028 [2024-11-20 10:00:31.879364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.028 [2024-11-20 10:00:31.879379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.028 [2024-11-20 10:00:31.879628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.028 [2024-11-20 10:00:31.879837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.028 [2024-11-20 10:00:31.879856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.028 [2024-11-20 10:00:31.879867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.028 [2024-11-20 10:00:31.879878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.028 [2024-11-20 10:00:31.891968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.028 [2024-11-20 10:00:31.892495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.028 [2024-11-20 10:00:31.892523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.028 [2024-11-20 10:00:31.892538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.028 [2024-11-20 10:00:31.892797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.028 [2024-11-20 10:00:31.892991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.028 [2024-11-20 10:00:31.893010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.028 [2024-11-20 10:00:31.893022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.029 [2024-11-20 10:00:31.893033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.029 [2024-11-20 10:00:31.905134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.029 [2024-11-20 10:00:31.905491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.029 [2024-11-20 10:00:31.905525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.029 [2024-11-20 10:00:31.905541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.029 [2024-11-20 10:00:31.905794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.029 [2024-11-20 10:00:31.905989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.029 [2024-11-20 10:00:31.906007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.029 [2024-11-20 10:00:31.906019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.029 [2024-11-20 10:00:31.906030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.029 [2024-11-20 10:00:31.918280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.029 [2024-11-20 10:00:31.918611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.029 [2024-11-20 10:00:31.918653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.029 [2024-11-20 10:00:31.918668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.029 [2024-11-20 10:00:31.918899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.029 [2024-11-20 10:00:31.919110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.029 [2024-11-20 10:00:31.919129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.029 [2024-11-20 10:00:31.919141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.029 [2024-11-20 10:00:31.919152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.029 [2024-11-20 10:00:31.931328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.029 [2024-11-20 10:00:31.931640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.029 [2024-11-20 10:00:31.931681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.029 [2024-11-20 10:00:31.931695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.029 [2024-11-20 10:00:31.931911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.029 [2024-11-20 10:00:31.932120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.029 [2024-11-20 10:00:31.932140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.029 [2024-11-20 10:00:31.932152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.029 [2024-11-20 10:00:31.932163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.288 [2024-11-20 10:00:31.944807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.288 [2024-11-20 10:00:31.945237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.288 [2024-11-20 10:00:31.945278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.288 [2024-11-20 10:00:31.945294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.288 [2024-11-20 10:00:31.945541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.288 [2024-11-20 10:00:31.945770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.288 [2024-11-20 10:00:31.945790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.288 [2024-11-20 10:00:31.945801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.288 [2024-11-20 10:00:31.945812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.288 [2024-11-20 10:00:31.957822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.288 [2024-11-20 10:00:31.958203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.288 [2024-11-20 10:00:31.958243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.288 [2024-11-20 10:00:31.958258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.288 [2024-11-20 10:00:31.958517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.288 [2024-11-20 10:00:31.958748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.288 [2024-11-20 10:00:31.958768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.288 [2024-11-20 10:00:31.958780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.288 [2024-11-20 10:00:31.958791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.288 [2024-11-20 10:00:31.970877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.288 [2024-11-20 10:00:31.971213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.288 [2024-11-20 10:00:31.971240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.288 [2024-11-20 10:00:31.971255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.288 [2024-11-20 10:00:31.971521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.288 [2024-11-20 10:00:31.971736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.288 [2024-11-20 10:00:31.971755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.288 [2024-11-20 10:00:31.971768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.288 [2024-11-20 10:00:31.971779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.288 [2024-11-20 10:00:31.984108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.288 [2024-11-20 10:00:31.984510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.288 [2024-11-20 10:00:31.984539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.288 [2024-11-20 10:00:31.984555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.288 [2024-11-20 10:00:31.984807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.288 [2024-11-20 10:00:31.985001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.288 [2024-11-20 10:00:31.985020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.288 [2024-11-20 10:00:31.985037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.288 [2024-11-20 10:00:31.985049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.288 [2024-11-20 10:00:31.997275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.288 [2024-11-20 10:00:31.997655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.288 [2024-11-20 10:00:31.997697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.288 [2024-11-20 10:00:31.997712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.288 [2024-11-20 10:00:31.997961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.288 [2024-11-20 10:00:31.998169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.288 [2024-11-20 10:00:31.998188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.288 [2024-11-20 10:00:31.998200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.288 [2024-11-20 10:00:31.998211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.288 [2024-11-20 10:00:32.010478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.288 [2024-11-20 10:00:32.010883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.288 [2024-11-20 10:00:32.010910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.288 [2024-11-20 10:00:32.010925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.288 [2024-11-20 10:00:32.011140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.288 [2024-11-20 10:00:32.011387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.288 [2024-11-20 10:00:32.011408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.288 [2024-11-20 10:00:32.011421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.288 [2024-11-20 10:00:32.011432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.288 [2024-11-20 10:00:32.023648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.288 [2024-11-20 10:00:32.024014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.288 [2024-11-20 10:00:32.024043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.288 [2024-11-20 10:00:32.024059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.288 [2024-11-20 10:00:32.024288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.288 [2024-11-20 10:00:32.024539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.288 [2024-11-20 10:00:32.024561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.288 [2024-11-20 10:00:32.024574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.288 [2024-11-20 10:00:32.024586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.288 [2024-11-20 10:00:32.036972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.288 [2024-11-20 10:00:32.037367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.288 [2024-11-20 10:00:32.037396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.288 [2024-11-20 10:00:32.037411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.288 [2024-11-20 10:00:32.037641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.288 [2024-11-20 10:00:32.037857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.288 [2024-11-20 10:00:32.037876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.288 [2024-11-20 10:00:32.037889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.289 [2024-11-20 10:00:32.037901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.289 [2024-11-20 10:00:32.050238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.289 [2024-11-20 10:00:32.050636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.289 [2024-11-20 10:00:32.050679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.289 [2024-11-20 10:00:32.050695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.289 [2024-11-20 10:00:32.050927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.289 [2024-11-20 10:00:32.051121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.289 [2024-11-20 10:00:32.051140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.289 [2024-11-20 10:00:32.051152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.289 [2024-11-20 10:00:32.051164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.289 [2024-11-20 10:00:32.063456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.289 [2024-11-20 10:00:32.063899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.289 [2024-11-20 10:00:32.063927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.289 [2024-11-20 10:00:32.063942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.289 [2024-11-20 10:00:32.064178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.289 [2024-11-20 10:00:32.064433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.289 [2024-11-20 10:00:32.064454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.289 [2024-11-20 10:00:32.064467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.289 [2024-11-20 10:00:32.064479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.289 [2024-11-20 10:00:32.076584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.289 [2024-11-20 10:00:32.077075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.289 [2024-11-20 10:00:32.077107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.289 [2024-11-20 10:00:32.077138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.289 [2024-11-20 10:00:32.077417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.289 [2024-11-20 10:00:32.077624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.289 [2024-11-20 10:00:32.077644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.289 [2024-11-20 10:00:32.077657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.289 [2024-11-20 10:00:32.077668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.289 [2024-11-20 10:00:32.089624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.289 [2024-11-20 10:00:32.089988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.289 [2024-11-20 10:00:32.090030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.289 [2024-11-20 10:00:32.090046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.289 [2024-11-20 10:00:32.090294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.289 [2024-11-20 10:00:32.090538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.289 [2024-11-20 10:00:32.090559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.289 [2024-11-20 10:00:32.090572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.289 [2024-11-20 10:00:32.090583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.289 [2024-11-20 10:00:32.102788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.289 [2024-11-20 10:00:32.103091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.289 [2024-11-20 10:00:32.103132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.289 [2024-11-20 10:00:32.103147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.289 [2024-11-20 10:00:32.103375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.289 [2024-11-20 10:00:32.103598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.289 [2024-11-20 10:00:32.103618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.289 [2024-11-20 10:00:32.103646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.289 [2024-11-20 10:00:32.103658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.289 [2024-11-20 10:00:32.115879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.289 [2024-11-20 10:00:32.116240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.289 [2024-11-20 10:00:32.116283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.289 [2024-11-20 10:00:32.116300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.289 [2024-11-20 10:00:32.116530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.289 [2024-11-20 10:00:32.116769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.289 [2024-11-20 10:00:32.116789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.289 [2024-11-20 10:00:32.116801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.289 [2024-11-20 10:00:32.116813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.289 [2024-11-20 10:00:32.129094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.289 [2024-11-20 10:00:32.129483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.289 [2024-11-20 10:00:32.129525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.289 [2024-11-20 10:00:32.129541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.289 [2024-11-20 10:00:32.129765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.289 [2024-11-20 10:00:32.129976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.289 [2024-11-20 10:00:32.129995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.289 [2024-11-20 10:00:32.130007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.289 [2024-11-20 10:00:32.130018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.289 [2024-11-20 10:00:32.142099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.289 [2024-11-20 10:00:32.142431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.289 [2024-11-20 10:00:32.142457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.289 [2024-11-20 10:00:32.142472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.289 [2024-11-20 10:00:32.142687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.289 [2024-11-20 10:00:32.142898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.289 [2024-11-20 10:00:32.142917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.289 [2024-11-20 10:00:32.142929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.289 [2024-11-20 10:00:32.142940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.289 [2024-11-20 10:00:32.155094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.289 [2024-11-20 10:00:32.155502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.289 [2024-11-20 10:00:32.155530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.289 [2024-11-20 10:00:32.155545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.289 [2024-11-20 10:00:32.155767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.289 [2024-11-20 10:00:32.155977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.289 [2024-11-20 10:00:32.155997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.289 [2024-11-20 10:00:32.156016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.289 [2024-11-20 10:00:32.156028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.289 [2024-11-20 10:00:32.168176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.289 [2024-11-20 10:00:32.168609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.289 [2024-11-20 10:00:32.168650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.289 [2024-11-20 10:00:32.168667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.289 [2024-11-20 10:00:32.168907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.289 [2024-11-20 10:00:32.169116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.290 [2024-11-20 10:00:32.169135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.290 [2024-11-20 10:00:32.169148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.290 [2024-11-20 10:00:32.169159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.290 [2024-11-20 10:00:32.181311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.290 [2024-11-20 10:00:32.181738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.290 [2024-11-20 10:00:32.181766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.290 [2024-11-20 10:00:32.181781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.290 [2024-11-20 10:00:32.182016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.290 [2024-11-20 10:00:32.182226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.290 [2024-11-20 10:00:32.182245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.290 [2024-11-20 10:00:32.182256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.290 [2024-11-20 10:00:32.182268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.290 [2024-11-20 10:00:32.194329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.290 [2024-11-20 10:00:32.194709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.290 [2024-11-20 10:00:32.194736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.290 [2024-11-20 10:00:32.194751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.290 [2024-11-20 10:00:32.194986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.290 [2024-11-20 10:00:32.195196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.290 [2024-11-20 10:00:32.195215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.290 [2024-11-20 10:00:32.195226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.290 [2024-11-20 10:00:32.195253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.549 [2024-11-20 10:00:32.207607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.549 [2024-11-20 10:00:32.208097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.549 [2024-11-20 10:00:32.208124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.549 [2024-11-20 10:00:32.208155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.549 [2024-11-20 10:00:32.208422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.549 [2024-11-20 10:00:32.208662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.549 [2024-11-20 10:00:32.208681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.549 [2024-11-20 10:00:32.208694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.549 [2024-11-20 10:00:32.208705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.549 [2024-11-20 10:00:32.220665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.549 [2024-11-20 10:00:32.221027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.549 [2024-11-20 10:00:32.221054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.549 [2024-11-20 10:00:32.221070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.549 [2024-11-20 10:00:32.221315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.549 [2024-11-20 10:00:32.221535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.549 [2024-11-20 10:00:32.221556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.549 [2024-11-20 10:00:32.221569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.549 [2024-11-20 10:00:32.221580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.549 [2024-11-20 10:00:32.233772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.549 [2024-11-20 10:00:32.234112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.549 [2024-11-20 10:00:32.234189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.549 [2024-11-20 10:00:32.234204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.549 [2024-11-20 10:00:32.234449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.549 [2024-11-20 10:00:32.234667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.549 [2024-11-20 10:00:32.234687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.549 [2024-11-20 10:00:32.234699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.549 [2024-11-20 10:00:32.234710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.549 4354.20 IOPS, 17.01 MiB/s [2024-11-20T09:00:32.463Z] [2024-11-20 10:00:32.246935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.549 [2024-11-20 10:00:32.247373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.549 [2024-11-20 10:00:32.247407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.549 [2024-11-20 10:00:32.247423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.549 [2024-11-20 10:00:32.247659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.549 [2024-11-20 10:00:32.247869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.549 [2024-11-20 10:00:32.247888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.549 [2024-11-20 10:00:32.247900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.549 [2024-11-20 10:00:32.247911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.549 [2024-11-20 10:00:32.260069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.549 [2024-11-20 10:00:32.260564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.549 [2024-11-20 10:00:32.260592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.549 [2024-11-20 10:00:32.260623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.549 [2024-11-20 10:00:32.260876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.549 [2024-11-20 10:00:32.261085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.549 [2024-11-20 10:00:32.261104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.549 [2024-11-20 10:00:32.261116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.549 [2024-11-20 10:00:32.261127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
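The periodic bdevperf counter just above reports 4354.20 IOPS alongside 17.01 MiB/s. Dividing the two, 17.01 × 2^20 bytes/s ÷ 4354.20 ops/s ≈ 4096 bytes per operation, so the throughput figure is consistent with a 4 KiB I/O size; this is only an inference from the numbers printed on this line, not a setting stated anywhere in the output.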
00:26:55.549 [2024-11-20 10:00:32.273176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.549 [2024-11-20 10:00:32.273548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.549 [2024-11-20 10:00:32.273592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.549 [2024-11-20 10:00:32.273607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.549 [2024-11-20 10:00:32.273877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.549 [2024-11-20 10:00:32.274097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.549 [2024-11-20 10:00:32.274118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.549 [2024-11-20 10:00:32.274130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.549 [2024-11-20 10:00:32.274142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.549 [2024-11-20 10:00:32.286340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.549 [2024-11-20 10:00:32.286705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.549 [2024-11-20 10:00:32.286732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.549 [2024-11-20 10:00:32.286747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.549 [2024-11-20 10:00:32.286966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.549 [2024-11-20 10:00:32.287178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.549 [2024-11-20 10:00:32.287197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.549 [2024-11-20 10:00:32.287209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.549 [2024-11-20 10:00:32.287220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.549 [2024-11-20 10:00:32.299501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.549 [2024-11-20 10:00:32.299898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.549 [2024-11-20 10:00:32.299939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.549 [2024-11-20 10:00:32.299954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.549 [2024-11-20 10:00:32.300200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.549 [2024-11-20 10:00:32.300439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.549 [2024-11-20 10:00:32.300461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.549 [2024-11-20 10:00:32.300474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.549 [2024-11-20 10:00:32.300486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.550 [2024-11-20 10:00:32.312560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.550 [2024-11-20 10:00:32.312925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.550 [2024-11-20 10:00:32.312967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.550 [2024-11-20 10:00:32.312982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.550 [2024-11-20 10:00:32.313230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.550 [2024-11-20 10:00:32.313471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.550 [2024-11-20 10:00:32.313492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.550 [2024-11-20 10:00:32.313505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.550 [2024-11-20 10:00:32.313517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.550 [2024-11-20 10:00:32.325657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.550 [2024-11-20 10:00:32.326021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.550 [2024-11-20 10:00:32.326049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.550 [2024-11-20 10:00:32.326064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.550 [2024-11-20 10:00:32.326299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.550 [2024-11-20 10:00:32.326528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.550 [2024-11-20 10:00:32.326553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.550 [2024-11-20 10:00:32.326567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.550 [2024-11-20 10:00:32.326593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.550 [2024-11-20 10:00:32.338874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.550 [2024-11-20 10:00:32.339355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.550 [2024-11-20 10:00:32.339383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.550 [2024-11-20 10:00:32.339399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.550 [2024-11-20 10:00:32.339666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.550 [2024-11-20 10:00:32.339860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.550 [2024-11-20 10:00:32.339879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.550 [2024-11-20 10:00:32.339891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.550 [2024-11-20 10:00:32.339902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.550 [2024-11-20 10:00:32.351898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.550 [2024-11-20 10:00:32.352378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.550 [2024-11-20 10:00:32.352405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.550 [2024-11-20 10:00:32.352435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.550 [2024-11-20 10:00:32.352682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.550 [2024-11-20 10:00:32.352876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.550 [2024-11-20 10:00:32.352894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.550 [2024-11-20 10:00:32.352907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.550 [2024-11-20 10:00:32.352918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.550 [2024-11-20 10:00:32.365065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.550 [2024-11-20 10:00:32.365436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.550 [2024-11-20 10:00:32.365479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.550 [2024-11-20 10:00:32.365495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.550 [2024-11-20 10:00:32.365765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.550 [2024-11-20 10:00:32.365960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.550 [2024-11-20 10:00:32.365979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.550 [2024-11-20 10:00:32.365990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.550 [2024-11-20 10:00:32.366002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.550 [2024-11-20 10:00:32.378312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.550 [2024-11-20 10:00:32.378730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.550 [2024-11-20 10:00:32.378756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.550 [2024-11-20 10:00:32.378771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.550 [2024-11-20 10:00:32.378987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.550 [2024-11-20 10:00:32.379198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.550 [2024-11-20 10:00:32.379217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.550 [2024-11-20 10:00:32.379229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.550 [2024-11-20 10:00:32.379240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.550 [2024-11-20 10:00:32.391557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.550 [2024-11-20 10:00:32.392004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.550 [2024-11-20 10:00:32.392031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.550 [2024-11-20 10:00:32.392062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.550 [2024-11-20 10:00:32.392312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.550 [2024-11-20 10:00:32.392532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.550 [2024-11-20 10:00:32.392553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.550 [2024-11-20 10:00:32.392565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.550 [2024-11-20 10:00:32.392577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.550 [2024-11-20 10:00:32.404861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.550 [2024-11-20 10:00:32.405350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.550 [2024-11-20 10:00:32.405394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.550 [2024-11-20 10:00:32.405410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.550 [2024-11-20 10:00:32.405648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.550 [2024-11-20 10:00:32.405859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.550 [2024-11-20 10:00:32.405878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.550 [2024-11-20 10:00:32.405890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.550 [2024-11-20 10:00:32.405901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.550 [2024-11-20 10:00:32.418016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.550 [2024-11-20 10:00:32.418449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.550 [2024-11-20 10:00:32.418482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.550 [2024-11-20 10:00:32.418499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.550 [2024-11-20 10:00:32.418739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.550 [2024-11-20 10:00:32.418950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.550 [2024-11-20 10:00:32.418969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.550 [2024-11-20 10:00:32.418981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.550 [2024-11-20 10:00:32.418993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.550 [2024-11-20 10:00:32.431085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.550 [2024-11-20 10:00:32.431471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.550 [2024-11-20 10:00:32.431540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.550 [2024-11-20 10:00:32.431556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.550 [2024-11-20 10:00:32.431819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.550 [2024-11-20 10:00:32.432013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.550 [2024-11-20 10:00:32.432032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.550 [2024-11-20 10:00:32.432044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.551 [2024-11-20 10:00:32.432055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.551 [2024-11-20 10:00:32.444237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.551 [2024-11-20 10:00:32.444645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.551 [2024-11-20 10:00:32.444674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.551 [2024-11-20 10:00:32.444690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.551 [2024-11-20 10:00:32.444932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.551 [2024-11-20 10:00:32.445146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.551 [2024-11-20 10:00:32.445165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.551 [2024-11-20 10:00:32.445177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.551 [2024-11-20 10:00:32.445188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.551 [2024-11-20 10:00:32.457679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.551 [2024-11-20 10:00:32.458060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.551 [2024-11-20 10:00:32.458089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.551 [2024-11-20 10:00:32.458105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.551 [2024-11-20 10:00:32.458346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.551 [2024-11-20 10:00:32.458584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.551 [2024-11-20 10:00:32.458605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.551 [2024-11-20 10:00:32.458617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.551 [2024-11-20 10:00:32.458644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.810 [2024-11-20 10:00:32.470913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.810 [2024-11-20 10:00:32.471278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.810 [2024-11-20 10:00:32.471310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.810 [2024-11-20 10:00:32.471342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.810 [2024-11-20 10:00:32.471582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.810 [2024-11-20 10:00:32.471811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.810 [2024-11-20 10:00:32.471830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.810 [2024-11-20 10:00:32.471842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.810 [2024-11-20 10:00:32.471853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.810 [2024-11-20 10:00:32.484130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.810 [2024-11-20 10:00:32.484500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.810 [2024-11-20 10:00:32.484565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.810 [2024-11-20 10:00:32.484581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.810 [2024-11-20 10:00:32.484831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.810 [2024-11-20 10:00:32.485040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.810 [2024-11-20 10:00:32.485059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.810 [2024-11-20 10:00:32.485071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.810 [2024-11-20 10:00:32.485082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.810 [2024-11-20 10:00:32.497220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.810 [2024-11-20 10:00:32.497742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.810 [2024-11-20 10:00:32.497795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.810 [2024-11-20 10:00:32.497810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.810 [2024-11-20 10:00:32.498037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.810 [2024-11-20 10:00:32.498241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.810 [2024-11-20 10:00:32.498264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.810 [2024-11-20 10:00:32.498277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.810 [2024-11-20 10:00:32.498312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.810 [2024-11-20 10:00:32.510463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.810 [2024-11-20 10:00:32.510890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.810 [2024-11-20 10:00:32.510945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.810 [2024-11-20 10:00:32.510961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.810 [2024-11-20 10:00:32.511223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.810 [2024-11-20 10:00:32.511465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.810 [2024-11-20 10:00:32.511486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.810 [2024-11-20 10:00:32.511499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.810 [2024-11-20 10:00:32.511511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.810 [2024-11-20 10:00:32.523621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.810 [2024-11-20 10:00:32.523918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.810 [2024-11-20 10:00:32.523958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.810 [2024-11-20 10:00:32.523973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.810 [2024-11-20 10:00:32.524210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.810 [2024-11-20 10:00:32.524426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.810 [2024-11-20 10:00:32.524447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.810 [2024-11-20 10:00:32.524459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.810 [2024-11-20 10:00:32.524471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.810 [2024-11-20 10:00:32.536909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.810 [2024-11-20 10:00:32.537213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.810 [2024-11-20 10:00:32.537239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.810 [2024-11-20 10:00:32.537254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.810 [2024-11-20 10:00:32.537497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.810 [2024-11-20 10:00:32.537732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.810 [2024-11-20 10:00:32.537751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.811 [2024-11-20 10:00:32.537763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.811 [2024-11-20 10:00:32.537774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.811 [2024-11-20 10:00:32.550089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.811 [2024-11-20 10:00:32.550612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.811 [2024-11-20 10:00:32.550663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.811 [2024-11-20 10:00:32.550678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.811 [2024-11-20 10:00:32.550902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.811 [2024-11-20 10:00:32.551096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.811 [2024-11-20 10:00:32.551116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.811 [2024-11-20 10:00:32.551128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.811 [2024-11-20 10:00:32.551139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.811 [2024-11-20 10:00:32.563246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.811 [2024-11-20 10:00:32.563684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.811 [2024-11-20 10:00:32.563737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.811 [2024-11-20 10:00:32.563751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.811 [2024-11-20 10:00:32.563993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.811 [2024-11-20 10:00:32.564187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.811 [2024-11-20 10:00:32.564206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.811 [2024-11-20 10:00:32.564217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.811 [2024-11-20 10:00:32.564229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.811 [2024-11-20 10:00:32.576423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.811 [2024-11-20 10:00:32.576913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.811 [2024-11-20 10:00:32.576955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.811 [2024-11-20 10:00:32.576970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.811 [2024-11-20 10:00:32.577216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.811 [2024-11-20 10:00:32.577456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.811 [2024-11-20 10:00:32.577478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.811 [2024-11-20 10:00:32.577490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.811 [2024-11-20 10:00:32.577502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.811 [2024-11-20 10:00:32.589467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.811 [2024-11-20 10:00:32.589835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.811 [2024-11-20 10:00:32.589881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.811 [2024-11-20 10:00:32.589897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.811 [2024-11-20 10:00:32.590146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.811 [2024-11-20 10:00:32.590385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.811 [2024-11-20 10:00:32.590406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.811 [2024-11-20 10:00:32.590419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.811 [2024-11-20 10:00:32.590430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.811 [2024-11-20 10:00:32.602661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.811 [2024-11-20 10:00:32.603089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.811 [2024-11-20 10:00:32.603131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.811 [2024-11-20 10:00:32.603146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.811 [2024-11-20 10:00:32.603410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.811 [2024-11-20 10:00:32.603632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.811 [2024-11-20 10:00:32.603666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.811 [2024-11-20 10:00:32.603679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.811 [2024-11-20 10:00:32.603690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.811 [2024-11-20 10:00:32.615898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.811 [2024-11-20 10:00:32.616202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.811 [2024-11-20 10:00:32.616242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.811 [2024-11-20 10:00:32.616257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.811 [2024-11-20 10:00:32.616507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.811 [2024-11-20 10:00:32.616738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.811 [2024-11-20 10:00:32.616757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.811 [2024-11-20 10:00:32.616769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.811 [2024-11-20 10:00:32.616780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.811 [2024-11-20 10:00:32.628955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.811 [2024-11-20 10:00:32.629314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.811 [2024-11-20 10:00:32.629342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.811 [2024-11-20 10:00:32.629357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.811 [2024-11-20 10:00:32.629586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.811 [2024-11-20 10:00:32.629823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.811 [2024-11-20 10:00:32.629842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.811 [2024-11-20 10:00:32.629854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.811 [2024-11-20 10:00:32.629866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.811 [2024-11-20 10:00:32.642117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.811 [2024-11-20 10:00:32.642492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.811 [2024-11-20 10:00:32.642535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.811 [2024-11-20 10:00:32.642551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.811 [2024-11-20 10:00:32.642804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.811 [2024-11-20 10:00:32.643013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.811 [2024-11-20 10:00:32.643032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.811 [2024-11-20 10:00:32.643043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.811 [2024-11-20 10:00:32.643054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.811 [2024-11-20 10:00:32.655272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.811 [2024-11-20 10:00:32.655616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.811 [2024-11-20 10:00:32.655645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.811 [2024-11-20 10:00:32.655660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.811 [2024-11-20 10:00:32.655882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.811 [2024-11-20 10:00:32.656093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.811 [2024-11-20 10:00:32.656113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.811 [2024-11-20 10:00:32.656125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.811 [2024-11-20 10:00:32.656136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.811 [2024-11-20 10:00:32.668325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.811 [2024-11-20 10:00:32.668689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.811 [2024-11-20 10:00:32.668716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.811 [2024-11-20 10:00:32.668731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.811 [2024-11-20 10:00:32.668966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.811 [2024-11-20 10:00:32.669176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.812 [2024-11-20 10:00:32.669200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.812 [2024-11-20 10:00:32.669212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.812 [2024-11-20 10:00:32.669224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.812 [2024-11-20 10:00:32.681446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.812 [2024-11-20 10:00:32.681935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.812 [2024-11-20 10:00:32.681978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.812 [2024-11-20 10:00:32.681995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.812 [2024-11-20 10:00:32.682247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.812 [2024-11-20 10:00:32.682502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.812 [2024-11-20 10:00:32.682524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.812 [2024-11-20 10:00:32.682536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.812 [2024-11-20 10:00:32.682548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.812 [2024-11-20 10:00:32.694499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.812 [2024-11-20 10:00:32.694863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.812 [2024-11-20 10:00:32.694891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.812 [2024-11-20 10:00:32.694907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.812 [2024-11-20 10:00:32.695149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.812 [2024-11-20 10:00:32.695386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.812 [2024-11-20 10:00:32.695407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.812 [2024-11-20 10:00:32.695419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.812 [2024-11-20 10:00:32.695430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.812 [2024-11-20 10:00:32.707516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.812 [2024-11-20 10:00:32.707847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.812 [2024-11-20 10:00:32.707875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:55.812 [2024-11-20 10:00:32.707890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:55.812 [2024-11-20 10:00:32.708112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:55.812 [2024-11-20 10:00:32.708348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.812 [2024-11-20 10:00:32.708368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.812 [2024-11-20 10:00:32.708380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.812 [2024-11-20 10:00:32.708392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.812 [2024-11-20 10:00:32.721125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.074 [2024-11-20 10:00:32.721591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.074 [2024-11-20 10:00:32.721645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.074 [2024-11-20 10:00:32.721661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.074 [2024-11-20 10:00:32.721902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.074 [2024-11-20 10:00:32.722122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.074 [2024-11-20 10:00:32.722142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.074 [2024-11-20 10:00:32.722155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.074 [2024-11-20 10:00:32.722166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3846404 Killed "${NVMF_APP[@]}" "$@" 00:26:56.074 10:00:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:56.074 10:00:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:56.074 10:00:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:56.074 [2024-11-20 10:00:32.734508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.074 10:00:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:56.074 10:00:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.074 [2024-11-20 10:00:32.734889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.074 [2024-11-20 10:00:32.734916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.074 [2024-11-20 10:00:32.734932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.074 [2024-11-20 10:00:32.735152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.074 [2024-11-20 10:00:32.735403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.074 [2024-11-20 10:00:32.735425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.074 [2024-11-20 10:00:32.735439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.074 [2024-11-20 10:00:32.735452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.074 10:00:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3847359 00:26:56.074 10:00:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:56.074 10:00:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3847359 00:26:56.074 10:00:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3847359 ']' 00:26:56.074 10:00:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.074 10:00:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:56.074 10:00:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.074 10:00:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:56.074 10:00:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.074 [2024-11-20 10:00:32.747915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.074 [2024-11-20 10:00:32.748295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.074 [2024-11-20 10:00:32.748351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.074 [2024-11-20 10:00:32.748368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.074 [2024-11-20 10:00:32.748597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.074 [2024-11-20 10:00:32.748814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.074 [2024-11-20 10:00:32.748834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.074 [2024-11-20 10:00:32.748846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.074 [2024-11-20 10:00:32.748858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
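tgt_init is restarting the target that bdevperf's reconnects are waiting for: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with shm id 0, tracepoint mask 0xFFFF and core mask 0xE, and waitforlisten then blocks until pid 3847359 answers on /var/tmp/spdk.sock. A standalone sketch of that start-and-wait step follows; the paths and namespace name are the ones in the trace, while the polling loop is a simplified stand-in for waitforlisten and rpc_get_methods is assumed to be the liveness check:

    NS=cvl_0_0_ns_spdk
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC_SOCK=/var/tmp/spdk.sock

    # Start the target in the test namespace, as nvmfappstart does via NVMF_APP.
    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    tgt_pid=$!

    # Simplified stand-in for waitforlisten: poll the RPC socket until it answers.
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
    echo "nvmf_tgt ($tgt_pid) is listening on $RPC_SOCK"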
00:26:56.074 [2024-11-20 10:00:32.761388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.074 [2024-11-20 10:00:32.761792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.074 [2024-11-20 10:00:32.761820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.074 [2024-11-20 10:00:32.761835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.074 [2024-11-20 10:00:32.762058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.074 [2024-11-20 10:00:32.762275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.074 [2024-11-20 10:00:32.762320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.074 [2024-11-20 10:00:32.762336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.074 [2024-11-20 10:00:32.762364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.074 [2024-11-20 10:00:32.774766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.074 [2024-11-20 10:00:32.775169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.074 [2024-11-20 10:00:32.775198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.074 [2024-11-20 10:00:32.775215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.074 [2024-11-20 10:00:32.775440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.075 [2024-11-20 10:00:32.775688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.075 [2024-11-20 10:00:32.775709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.075 [2024-11-20 10:00:32.775723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.075 [2024-11-20 10:00:32.775735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.075 [2024-11-20 10:00:32.787152] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:26:56.075 [2024-11-20 10:00:32.787227] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.075 [2024-11-20 10:00:32.788253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.075 [2024-11-20 10:00:32.788610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.075 [2024-11-20 10:00:32.788639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.075 [2024-11-20 10:00:32.788655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.075 [2024-11-20 10:00:32.788869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.075 [2024-11-20 10:00:32.789091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.075 [2024-11-20 10:00:32.789112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.075 [2024-11-20 10:00:32.789125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.075 [2024-11-20 10:00:32.789137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.075 [2024-11-20 10:00:32.801815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.075 [2024-11-20 10:00:32.802212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.075 [2024-11-20 10:00:32.802239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.075 [2024-11-20 10:00:32.802255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.075 [2024-11-20 10:00:32.802509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.075 [2024-11-20 10:00:32.802735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.075 [2024-11-20 10:00:32.802755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.075 [2024-11-20 10:00:32.802768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.075 [2024-11-20 10:00:32.802780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.075 [2024-11-20 10:00:32.815195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.075 [2024-11-20 10:00:32.815621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.075 [2024-11-20 10:00:32.815664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.075 [2024-11-20 10:00:32.815680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.075 [2024-11-20 10:00:32.815917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.075 [2024-11-20 10:00:32.816132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.075 [2024-11-20 10:00:32.816152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.075 [2024-11-20 10:00:32.816164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.075 [2024-11-20 10:00:32.816176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.075 [2024-11-20 10:00:32.828671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.075 [2024-11-20 10:00:32.829111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.075 [2024-11-20 10:00:32.829139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.075 [2024-11-20 10:00:32.829155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.075 [2024-11-20 10:00:32.829394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.075 [2024-11-20 10:00:32.829626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.075 [2024-11-20 10:00:32.829660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.075 [2024-11-20 10:00:32.829673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.075 [2024-11-20 10:00:32.829685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.075 [2024-11-20 10:00:32.842128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.075 [2024-11-20 10:00:32.842484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.075 [2024-11-20 10:00:32.842512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.075 [2024-11-20 10:00:32.842528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.075 [2024-11-20 10:00:32.842770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.075 [2024-11-20 10:00:32.842971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.075 [2024-11-20 10:00:32.842990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.075 [2024-11-20 10:00:32.843002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.075 [2024-11-20 10:00:32.843014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.075 [2024-11-20 10:00:32.855715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.075 [2024-11-20 10:00:32.856108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.075 [2024-11-20 10:00:32.856135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.075 [2024-11-20 10:00:32.856151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.075 [2024-11-20 10:00:32.856387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.075 [2024-11-20 10:00:32.856622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.075 [2024-11-20 10:00:32.856658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.075 [2024-11-20 10:00:32.856671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.075 [2024-11-20 10:00:32.856683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.075 [2024-11-20 10:00:32.865082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:56.075 [2024-11-20 10:00:32.869075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.075 [2024-11-20 10:00:32.869434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.075 [2024-11-20 10:00:32.869472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.075 [2024-11-20 10:00:32.869489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.075 [2024-11-20 10:00:32.869742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.075 [2024-11-20 10:00:32.869943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.075 [2024-11-20 10:00:32.869962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.075 [2024-11-20 10:00:32.869975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.075 [2024-11-20 10:00:32.869987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.075 [2024-11-20 10:00:32.882413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.075 [2024-11-20 10:00:32.882956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.075 [2024-11-20 10:00:32.882993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.075 [2024-11-20 10:00:32.883012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.075 [2024-11-20 10:00:32.883263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.075 [2024-11-20 10:00:32.883527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.075 [2024-11-20 10:00:32.883550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.075 [2024-11-20 10:00:32.883566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.075 [2024-11-20 10:00:32.883590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
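The new target sees three usable cores because of the -m 0xE mask passed above: 0xE is binary 1110, so cores 1, 2 and 3 are selected and core 0 is left out, which matches the three reactors reported a few lines further down. A small, purely illustrative check with bash arithmetic (SPDK parses the mask itself):

    mask=0xE            # binary 1110: bits 1, 2 and 3 set, bit 0 clear
    cores=()
    for bit in $(seq 0 31); do
        if (( (mask >> bit) & 1 )); then cores+=("$bit"); fi
    done
    echo "cores selected by $mask: ${cores[*]}"    # prints: cores selected by 0xE: 1 2 3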
00:26:56.075 [2024-11-20 10:00:32.895732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.075 [2024-11-20 10:00:32.896109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.075 [2024-11-20 10:00:32.896137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.075 [2024-11-20 10:00:32.896154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.075 [2024-11-20 10:00:32.896406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.075 [2024-11-20 10:00:32.896613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.075 [2024-11-20 10:00:32.896648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.075 [2024-11-20 10:00:32.896661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.075 [2024-11-20 10:00:32.896673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.076 [2024-11-20 10:00:32.909117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.076 [2024-11-20 10:00:32.909496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.076 [2024-11-20 10:00:32.909525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.076 [2024-11-20 10:00:32.909542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.076 [2024-11-20 10:00:32.909803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.076 [2024-11-20 10:00:32.910004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.076 [2024-11-20 10:00:32.910024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.076 [2024-11-20 10:00:32.910037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.076 [2024-11-20 10:00:32.910049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.076 [2024-11-20 10:00:32.922528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.076 [2024-11-20 10:00:32.922862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.076 [2024-11-20 10:00:32.922889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.076 [2024-11-20 10:00:32.922905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.076 [2024-11-20 10:00:32.923113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.076 [2024-11-20 10:00:32.923356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.076 [2024-11-20 10:00:32.923379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.076 [2024-11-20 10:00:32.923394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.076 [2024-11-20 10:00:32.923407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.076 [2024-11-20 10:00:32.924704] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.076 [2024-11-20 10:00:32.924733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.076 [2024-11-20 10:00:32.924763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.076 [2024-11-20 10:00:32.924775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.076 [2024-11-20 10:00:32.924785] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
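The notices above describe how to inspect the tracepoints enabled by -e 0xFFFF on the restarted target: either attach spdk_trace to the live shared-memory region, or keep the trace file for offline analysis. The commands below only restate the log's own suggestions; the copy destination is an arbitrary choice:

    # Snapshot the nvmf app's tracepoints for shm id 0, exactly as the notice suggests.
    spdk_trace -s nvmf -i 0

    # Or preserve the raw trace for offline analysis/debug (destination path is arbitrary).
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0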
00:26:56.076 [2024-11-20 10:00:32.926222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:56.076 [2024-11-20 10:00:32.926277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:56.076 [2024-11-20 10:00:32.926280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.076 [2024-11-20 10:00:32.936093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.076 [2024-11-20 10:00:32.936611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.076 [2024-11-20 10:00:32.936650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.076 [2024-11-20 10:00:32.936670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.076 [2024-11-20 10:00:32.936908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.076 [2024-11-20 10:00:32.937126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.076 [2024-11-20 10:00:32.937148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.076 [2024-11-20 10:00:32.937164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.076 [2024-11-20 10:00:32.937179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.076 [2024-11-20 10:00:32.949637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.076 [2024-11-20 10:00:32.950161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.076 [2024-11-20 10:00:32.950199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.076 [2024-11-20 10:00:32.950221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.076 [2024-11-20 10:00:32.950457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.076 [2024-11-20 10:00:32.950695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.076 [2024-11-20 10:00:32.950718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.076 [2024-11-20 10:00:32.950734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.076 [2024-11-20 10:00:32.950749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.076 [2024-11-20 10:00:32.963249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.076 [2024-11-20 10:00:32.963787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.076 [2024-11-20 10:00:32.963826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.076 [2024-11-20 10:00:32.963846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.076 [2024-11-20 10:00:32.964086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.076 [2024-11-20 10:00:32.964334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.076 [2024-11-20 10:00:32.964376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.076 [2024-11-20 10:00:32.964394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.076 [2024-11-20 10:00:32.964410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.076 [2024-11-20 10:00:32.976843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.076 [2024-11-20 10:00:32.977371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.076 [2024-11-20 10:00:32.977409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.076 [2024-11-20 10:00:32.977430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.076 [2024-11-20 10:00:32.977668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.076 [2024-11-20 10:00:32.977888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.076 [2024-11-20 10:00:32.977910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.076 [2024-11-20 10:00:32.977926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.076 [2024-11-20 10:00:32.977941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.396 [2024-11-20 10:00:32.990512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.396 [2024-11-20 10:00:32.990967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.396 [2024-11-20 10:00:32.991004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.396 [2024-11-20 10:00:32.991034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.396 [2024-11-20 10:00:32.991257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.396 [2024-11-20 10:00:32.991503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.396 [2024-11-20 10:00:32.991527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.396 [2024-11-20 10:00:32.991544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.396 [2024-11-20 10:00:32.991560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.396 [2024-11-20 10:00:33.004113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.396 [2024-11-20 10:00:33.004656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.396 [2024-11-20 10:00:33.004705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.396 [2024-11-20 10:00:33.004725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.396 [2024-11-20 10:00:33.004963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.396 [2024-11-20 10:00:33.005181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.396 [2024-11-20 10:00:33.005203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.396 [2024-11-20 10:00:33.005220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.396 [2024-11-20 10:00:33.005236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.396 [2024-11-20 10:00:33.017820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.396 [2024-11-20 10:00:33.018156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.396 [2024-11-20 10:00:33.018185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.396 [2024-11-20 10:00:33.018201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.396 [2024-11-20 10:00:33.018443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.396 [2024-11-20 10:00:33.018658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.396 [2024-11-20 10:00:33.018679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.396 [2024-11-20 10:00:33.018692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.396 [2024-11-20 10:00:33.018705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.396 [2024-11-20 10:00:33.031451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.396 [2024-11-20 10:00:33.031771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.396 [2024-11-20 10:00:33.031800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.397 [2024-11-20 10:00:33.031817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.397 [2024-11-20 10:00:33.032032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.397 [2024-11-20 10:00:33.032260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.397 [2024-11-20 10:00:33.032283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.397 [2024-11-20 10:00:33.032298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.397 [2024-11-20 10:00:33.032322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.397 [2024-11-20 10:00:33.045224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.397 [2024-11-20 10:00:33.045581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.397 [2024-11-20 10:00:33.045618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.397 [2024-11-20 10:00:33.045634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.397 [2024-11-20 10:00:33.045864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.397 [2024-11-20 10:00:33.046077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.397 [2024-11-20 10:00:33.046099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.397 [2024-11-20 10:00:33.046112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.397 [2024-11-20 10:00:33.046124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.397 [2024-11-20 10:00:33.058842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.397 [2024-11-20 10:00:33.059195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.397 [2024-11-20 10:00:33.059224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.397 [2024-11-20 10:00:33.059240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.397 [2024-11-20 10:00:33.059471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.397 [2024-11-20 10:00:33.059704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.397 [2024-11-20 10:00:33.059726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.397 [2024-11-20 10:00:33.059739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.397 [2024-11-20 10:00:33.059752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.397 [2024-11-20 10:00:33.066886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.397 [2024-11-20 10:00:33.072518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:56.397 [2024-11-20 10:00:33.072834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.397 [2024-11-20 10:00:33.072862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.397 [2024-11-20 10:00:33.072878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.397 [2024-11-20 10:00:33.073093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.397 [2024-11-20 10:00:33.073327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.397 [2024-11-20 10:00:33.073353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.397 [2024-11-20 10:00:33.073368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.397 [2024-11-20 10:00:33.073382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.397 [2024-11-20 10:00:33.085981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.397 [2024-11-20 10:00:33.086435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.397 [2024-11-20 10:00:33.086471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.397 [2024-11-20 10:00:33.086490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.397 [2024-11-20 10:00:33.086738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.397 [2024-11-20 10:00:33.086948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.397 [2024-11-20 10:00:33.086969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.397 [2024-11-20 10:00:33.086984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.397 [2024-11-20 10:00:33.086998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.397 [2024-11-20 10:00:33.099491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.397 [2024-11-20 10:00:33.099878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.397 [2024-11-20 10:00:33.099907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.397 [2024-11-20 10:00:33.099924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.397 [2024-11-20 10:00:33.100139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.397 [2024-11-20 10:00:33.100397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.397 [2024-11-20 10:00:33.100420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.397 [2024-11-20 10:00:33.100434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.397 [2024-11-20 10:00:33.100456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.397 Malloc0 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.397 [2024-11-20 10:00:33.113151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.397 [2024-11-20 10:00:33.113553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.397 [2024-11-20 10:00:33.113584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.397 [2024-11-20 10:00:33.113602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.397 [2024-11-20 10:00:33.113820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.397 [2024-11-20 10:00:33.114053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.397 [2024-11-20 10:00:33.114074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.397 [2024-11-20 10:00:33.114089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.397 [2024-11-20 10:00:33.114102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.397 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.397 [2024-11-20 10:00:33.126748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.397 [2024-11-20 10:00:33.127105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.397 [2024-11-20 10:00:33.127133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e8a40 with addr=10.0.0.2, port=4420 00:26:56.397 [2024-11-20 10:00:33.127149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e8a40 is same with the state(6) to be set 00:26:56.397 [2024-11-20 10:00:33.127376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e8a40 (9): Bad file descriptor 00:26:56.397 [2024-11-20 10:00:33.127611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:56.397 [2024-11-20 10:00:33.127632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:56.398 [2024-11-20 10:00:33.127645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:56.398 [2024-11-20 10:00:33.127658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:56.398 [2024-11-20 10:00:33.129570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:56.398 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.398 10:00:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3846600 00:26:56.398 [2024-11-20 10:00:33.140228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:56.398 [2024-11-20 10:00:33.210804] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
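Interleaved with the reconnect errors, tgt_init provisions the restarted target over RPC: a TCP transport with the traced options, a 64 MiB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a listener on 10.0.0.2:4420, which is what finally lets the last reset succeed. Replayed by hand against the same socket, the traced rpc_cmd calls correspond to roughly this sequence (a reconstruction from the trace, not a copy of the script):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420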
00:26:57.352 3659.00 IOPS, 14.29 MiB/s [2024-11-20T09:00:35.636Z] 4351.43 IOPS, 17.00 MiB/s [2024-11-20T09:00:36.568Z] 4888.38 IOPS, 19.10 MiB/s [2024-11-20T09:00:37.499Z] 5293.78 IOPS, 20.68 MiB/s [2024-11-20T09:00:38.432Z] 5625.20 IOPS, 21.97 MiB/s [2024-11-20T09:00:39.363Z] 5878.36 IOPS, 22.96 MiB/s [2024-11-20T09:00:40.295Z] 6108.00 IOPS, 23.86 MiB/s [2024-11-20T09:00:41.670Z] 6292.62 IOPS, 24.58 MiB/s [2024-11-20T09:00:42.608Z] 6456.21 IOPS, 25.22 MiB/s 00:27:05.694 Latency(us) 00:27:05.694 [2024-11-20T09:00:42.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.694 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:05.694 Verification LBA range: start 0x0 length 0x4000 00:27:05.694 Nvme1n1 : 15.00 6602.03 25.79 10057.29 0.00 7660.31 843.47 22622.06 00:27:05.694 [2024-11-20T09:00:42.608Z] =================================================================================================================== 00:27:05.694 [2024-11-20T09:00:42.608Z] Total : 6602.03 25.79 10057.29 0.00 7660.31 843.47 22622.06 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:05.694 rmmod nvme_tcp 00:27:05.694 rmmod nvme_fabrics 00:27:05.694 rmmod nvme_keyring 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3847359 ']' 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3847359 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3847359 ']' 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3847359 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3847359 
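The bdevperf summary is internally consistent with the 4096-byte IO size in the job description: MiB/s is IOPS multiplied by 4096 and divided by 1048576, so the 6602.03 IOPS average for Nvme1n1 comes out at the reported 25.79 MiB/s. A one-line check, nothing test-specific:

    awk 'BEGIN { printf "%.2f MiB/s\n", 6602.03 * 4096 / 1048576 }'    # prints 25.79 MiB/s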
00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3847359' 00:27:05.694 killing process with pid 3847359 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3847359 00:27:05.694 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3847359 00:27:05.952 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:05.952 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:05.952 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:05.952 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:05.952 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:05.952 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:05.952 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:05.952 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:05.952 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:05.952 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.952 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.952 10:00:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.487 10:00:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:08.487 00:27:08.487 real 0m22.737s 00:27:08.487 user 0m59.791s 00:27:08.487 sys 0m4.737s 00:27:08.487 10:00:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:08.487 10:00:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:08.487 ************************************ 00:27:08.487 END TEST nvmf_bdevperf 00:27:08.487 ************************************ 00:27:08.487 10:00:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:08.487 10:00:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:08.487 10:00:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:08.487 10:00:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.487 ************************************ 00:27:08.487 START TEST nvmf_target_disconnect 00:27:08.487 ************************************ 00:27:08.487 10:00:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:08.487 * Looking for test storage... 
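nvmftestfini then unwinds the setup: the kernel NVMe/TCP initiator modules are unloaded, target pid 3847359 is killed, iptables rules tagged SPDK_NVMF are dropped by restoring a filtered dump, the spdk namespace is removed, and leftover addresses on cvl_0_1 are flushed on the next line. Condensed into a standalone sketch built from the commands visible in the trace, with the namespace removal spelled out as an assumption about what _remove_spdk_ns amounts to here:

    modprobe -v -r nvme-tcp                      # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    kill 3847359                                 # killprocess: the nvmf_tgt started for this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk              # assumed equivalent of _remove_spdk_ns in this setup
    ip -4 addr flush cvl_0_1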
00:27:08.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:08.487 10:00:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:08.487 10:00:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:27:08.487 10:00:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:08.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.487 --rc genhtml_branch_coverage=1 00:27:08.487 --rc genhtml_function_coverage=1 00:27:08.487 --rc genhtml_legend=1 00:27:08.487 --rc geninfo_all_blocks=1 00:27:08.487 --rc geninfo_unexecuted_blocks=1 00:27:08.487 00:27:08.487 ' 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:08.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.487 --rc genhtml_branch_coverage=1 00:27:08.487 --rc genhtml_function_coverage=1 00:27:08.487 --rc genhtml_legend=1 00:27:08.487 --rc geninfo_all_blocks=1 00:27:08.487 --rc geninfo_unexecuted_blocks=1 00:27:08.487 00:27:08.487 ' 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:08.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.487 --rc genhtml_branch_coverage=1 00:27:08.487 --rc genhtml_function_coverage=1 00:27:08.487 --rc genhtml_legend=1 00:27:08.487 --rc geninfo_all_blocks=1 00:27:08.487 --rc geninfo_unexecuted_blocks=1 00:27:08.487 00:27:08.487 ' 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:08.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.487 --rc genhtml_branch_coverage=1 00:27:08.487 --rc genhtml_function_coverage=1 00:27:08.487 --rc genhtml_legend=1 00:27:08.487 --rc geninfo_all_blocks=1 00:27:08.487 --rc geninfo_unexecuted_blocks=1 00:27:08.487 00:27:08.487 ' 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:08.487 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:08.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:08.488 10:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.396 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:10.397 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:10.397 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:10.397 Found net devices under 0000:09:00.0: cvl_0_0 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:10.397 Found net devices under 0000:09:00.1: cvl_0_1 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
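The trace lines that follow build the two-namespace NVMe/TCP test topology. As a consolidated sketch (assuming the cvl_0_0 target-side and cvl_0_1 initiator-side interface names that appear in this log), the setup amounts to:

```bash
# Consolidated sketch of the topology configured in the trace below,
# assuming the cvl_0_0 (target) and cvl_0_1 (initiator) names from this log.
ip netns add cvl_0_0_ns_spdk                    # target gets its own network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator IP on the host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port, tagged so cleanup can strip only SPDK-added rules
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                              # host -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Putting the target port into its own namespace forces the initiator traffic onto a real TCP path between 10.0.0.1 and 10.0.0.2 rather than the loopback device, which is what the disconnect tests rely on.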
00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:10.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:27:10.397 00:27:10.397 --- 10.0.0.2 ping statistics --- 00:27:10.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.397 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:27:10.397 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:10.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:27:10.397 00:27:10.397 --- 10.0.0.1 ping statistics --- 00:27:10.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.398 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:10.398 ************************************ 00:27:10.398 START TEST nvmf_target_disconnect_tc1 00:27:10.398 ************************************ 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:10.398 10:00:47 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:10.398 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:10.657 [2024-11-20 10:00:47.375399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.657 [2024-11-20 10:00:47.375464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x92ef40 with addr=10.0.0.2, port=4420 00:27:10.657 [2024-11-20 10:00:47.375496] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:10.657 [2024-11-20 10:00:47.375534] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:10.657 [2024-11-20 10:00:47.375549] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:10.657 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:10.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:10.657 Initializing NVMe Controllers 00:27:10.657 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:10.657 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:10.657 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:10.657 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:10.657 00:27:10.657 real 0m0.100s 00:27:10.657 user 0m0.045s 00:27:10.657 sys 0m0.054s 00:27:10.657 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:10.657 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:10.657 ************************************ 00:27:10.657 END TEST nvmf_target_disconnect_tc1 00:27:10.657 ************************************ 00:27:10.657 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:10.657 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:10.657 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
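tc1 above deliberately runs the reconnect example before any target is listening on 10.0.0.2:4420, so spdk_nvme_probe() fails with connect() errno 111 (ECONNREFUSED on Linux) and the surrounding NOT wrapper counts that failure as a pass. A generic expect-failure sketch in the same spirit (not SPDK's actual NOT helper) looks like:

```bash
# Hypothetical helper: run a command that is expected to fail and turn
# that failure into a test pass; a success becomes a test failure.
expect_failure() {
    if "$@"; then
        echo "ERROR: command unexpectedly succeeded: $*" >&2
        return 1
    fi
    return 0   # non-zero exit (e.g. connect() ECONNREFUSED) is the expected outcome
}

expect_failure ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
```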
00:27:10.657 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:10.657 ************************************ 00:27:10.657 START TEST nvmf_target_disconnect_tc2 00:27:10.657 ************************************ 00:27:10.657 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:10.657 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:10.657 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:10.657 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:10.657 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:10.657 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.657 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3850527 00:27:10.657 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:10.658 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3850527 00:27:10.658 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3850527 ']' 00:27:10.658 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.658 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.658 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.658 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.658 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.658 [2024-11-20 10:00:47.490047] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:27:10.658 [2024-11-20 10:00:47.490123] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.658 [2024-11-20 10:00:47.563245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:10.916 [2024-11-20 10:00:47.626626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.916 [2024-11-20 10:00:47.626678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:10.916 [2024-11-20 10:00:47.626692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.916 [2024-11-20 10:00:47.626702] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.916 [2024-11-20 10:00:47.626713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:10.916 [2024-11-20 10:00:47.628329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:10.916 [2024-11-20 10:00:47.628444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:10.916 [2024-11-20 10:00:47.628512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:10.916 [2024-11-20 10:00:47.628516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:10.916 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.916 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:10.916 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:10.916 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:10.916 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.916 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.916 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:10.916 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.916 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.916 Malloc0 00:27:10.916 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.916 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:10.916 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.916 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.916 [2024-11-20 10:00:47.817455] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.916 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.916 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:10.916 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.916 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:11.175 10:00:47 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.175 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:11.175 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.175 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:11.175 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.176 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:11.176 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.176 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:11.176 [2024-11-20 10:00:47.845725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.176 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.176 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:11.176 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.176 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:11.176 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.176 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3850553 00:27:11.176 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:11.176 10:00:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:13.091 10:00:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3850527 00:27:13.091 10:00:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:13.091 Read completed with error (sct=0, sc=8) 00:27:13.091 starting I/O failed 00:27:13.091 Read completed with error (sct=0, sc=8) 00:27:13.091 starting I/O failed 00:27:13.091 Read completed with error (sct=0, sc=8) 00:27:13.091 starting I/O failed 00:27:13.091 Read completed with error (sct=0, sc=8) 00:27:13.091 starting I/O failed 00:27:13.091 Read completed with error (sct=0, sc=8) 00:27:13.091 starting I/O failed 00:27:13.091 Read completed with error (sct=0, sc=8) 00:27:13.091 starting I/O failed 00:27:13.091 Read completed with error 
(sct=0, sc=8) 00:27:13.091 starting I/O failed 00:27:13.091 Read completed with error (sct=0, sc=8) 00:27:13.091 starting I/O failed 00:27:13.091 Read completed with error (sct=0, sc=8) 00:27:13.091 starting I/O failed 00:27:13.091 Read completed with error (sct=0, sc=8) 00:27:13.091 starting I/O failed 00:27:13.091 Read completed with error (sct=0, sc=8) 00:27:13.091 starting I/O failed 00:27:13.091 Read completed with error (sct=0, sc=8) 00:27:13.091 starting I/O failed 00:27:13.091 Write completed with error (sct=0, sc=8) 00:27:13.091 starting I/O failed 00:27:13.091 Read completed with error (sct=0, sc=8) 00:27:13.091 starting I/O failed 00:27:13.091 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 [2024-11-20 10:00:49.872174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed 
with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 [2024-11-20 10:00:49.872510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 
00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Write completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 Read completed with error (sct=0, sc=8) 00:27:13.092 starting I/O failed 00:27:13.092 [2024-11-20 10:00:49.872845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.092 [2024-11-20 10:00:49.873035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.092 [2024-11-20 10:00:49.873076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.092 qpair failed and we were unable to recover it. 00:27:13.092 [2024-11-20 10:00:49.873181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.092 [2024-11-20 10:00:49.873207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.092 qpair failed and we were unable to recover it. 00:27:13.092 [2024-11-20 10:00:49.873313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.092 [2024-11-20 10:00:49.873344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.092 qpair failed and we were unable to recover it. 00:27:13.092 [2024-11-20 10:00:49.873447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.092 [2024-11-20 10:00:49.873473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.092 qpair failed and we were unable to recover it. 00:27:13.092 [2024-11-20 10:00:49.873563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.092 [2024-11-20 10:00:49.873596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.092 qpair failed and we were unable to recover it. 00:27:13.092 [2024-11-20 10:00:49.873698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.092 [2024-11-20 10:00:49.873723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.092 qpair failed and we were unable to recover it. 
00:27:13.092 [2024-11-20 10:00:49.873804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.092 [2024-11-20 10:00:49.873829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.092 qpair failed and we were unable to recover it. 00:27:13.092 [2024-11-20 10:00:49.873956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.092 [2024-11-20 10:00:49.873982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.092 qpair failed and we were unable to recover it. 00:27:13.092 [2024-11-20 10:00:49.874106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.874133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.874245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.874271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.874385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.874412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.874508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.874533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.874685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.874711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.874825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.874851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.874969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.874995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.875122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.875148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 
00:27:13.093 [2024-11-20 10:00:49.875253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.875300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.875448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.875476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.875581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.875607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.875721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.875747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.875823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.875851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.875990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.876016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.876158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.876185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.876308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.876341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.876438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.876466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.876626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.876654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 
00:27:13.093 [2024-11-20 10:00:49.876775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.876800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.876918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.876944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.877137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.877165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.877268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.877294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.877414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.877440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.877536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.877575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.877712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.877738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.877858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.877886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.878000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.878026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.878146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.878173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 
00:27:13.093 [2024-11-20 10:00:49.878293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.878327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.878451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.878478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.878576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.878613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.878724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.878750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.878888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.878914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.879030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.879057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.879202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.879227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.879367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.879409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.879514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.093 [2024-11-20 10:00:49.879545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.093 qpair failed and we were unable to recover it. 00:27:13.093 [2024-11-20 10:00:49.879671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.879698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 
00:27:13.094 [2024-11-20 10:00:49.879839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.879868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.879984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.880010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.880106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.880132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.880232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.880259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.880411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.880437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.880526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.880553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.880638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.880664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.880778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.880804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.880922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.880948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.881064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.881090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 
00:27:13.094 [2024-11-20 10:00:49.881234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.881266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.881371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.881396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.881518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.881545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.881692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.881719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.881834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.881860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.881936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.881961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.882051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.882076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.882159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.882184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.882299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.882340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.882425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.882451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 
00:27:13.094 [2024-11-20 10:00:49.882560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.882598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.882682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.882708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.882831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.882857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.882966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.882992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.883107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.883133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.883223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.883250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.883380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.883406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.883483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.883509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.883600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.883625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.883717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.883743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 
00:27:13.094 [2024-11-20 10:00:49.883822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.883848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.883965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.883991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.884100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.884126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.884252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.884292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.884402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.884429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.884528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.094 [2024-11-20 10:00:49.884555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.094 qpair failed and we were unable to recover it. 00:27:13.094 [2024-11-20 10:00:49.884644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.884672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.884771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.884798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.884886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.884913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.885028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.885056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 
00:27:13.095 [2024-11-20 10:00:49.885167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.885193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 Read completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Read completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Read completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Write completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Write completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Write completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Write completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Read completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Write completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Read completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Write completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Read completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Write completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Read completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Write completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Read completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Write completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Write completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Write completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Write completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Write completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Read completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Read completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Read completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Write completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Read completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Read completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Read completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Read completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Write completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Write completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 Read completed with error (sct=0, sc=8) 00:27:13.095 starting I/O failed 00:27:13.095 [2024-11-20 10:00:49.885516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:13.095 [2024-11-20 10:00:49.885609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.885637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.885755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.885781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.885903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.885934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.886060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.886087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.886207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.886233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.886327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.886354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.886436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.886462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.886584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.886611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.886703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.886729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.886816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.886842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 
00:27:13.095 [2024-11-20 10:00:49.886963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.886990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.887073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.887099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.887222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.887249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.887367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.887395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.887517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.887546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.887652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.887695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.887826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.887854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.887963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.887989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.888133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.888158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.888255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.888281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 
00:27:13.095 [2024-11-20 10:00:49.888411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.888438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.095 [2024-11-20 10:00:49.888555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.095 [2024-11-20 10:00:49.888581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.095 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.888670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.888696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.888826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.888852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.888971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.888998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.889119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.889145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.889254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.889281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.889370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.889398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.889515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.889541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.889652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.889690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 
00:27:13.096 [2024-11-20 10:00:49.889817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.889844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.889961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.889987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.890088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.890115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.890204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.890230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.890428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.890455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.890546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.890571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.890649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.890675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.890793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.890819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.890918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.890944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.891033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.891059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 
00:27:13.096 [2024-11-20 10:00:49.891167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.891193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.891297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.891345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.891477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.891510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.891660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.891687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.891780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.891807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.891924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.891951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.892066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.892093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.892211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.892237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.892357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.892383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.892481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.892509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 
00:27:13.096 [2024-11-20 10:00:49.892651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.892677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.892789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.892815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.892898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.096 [2024-11-20 10:00:49.892925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.096 qpair failed and we were unable to recover it. 00:27:13.096 [2024-11-20 10:00:49.893015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.893043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.893163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.893190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.893283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.893319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.893451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.893477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.893594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.893619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.893819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.893844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.893994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.894045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 
00:27:13.097 [2024-11-20 10:00:49.894159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.894186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.894332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.894358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.894473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.894499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.894610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.894636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.894720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.894745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.894853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.894878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.894960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.894985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.895067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.895093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.895205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.895231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.895361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.895387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 
00:27:13.097 [2024-11-20 10:00:49.895529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.895555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.895660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.895685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.895808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.895833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.895924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.895950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.896094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.896123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.896234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.896260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.896354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.896381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.896471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.896496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.896584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.896611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.896694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.896721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 
00:27:13.097 [2024-11-20 10:00:49.896839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.896866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.896968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.897007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.897113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.897147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.897238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.897266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.897394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.897421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.897511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.897536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.897652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.897677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.897778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.897805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.897920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.897946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 00:27:13.097 [2024-11-20 10:00:49.898060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.898086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.097 qpair failed and we were unable to recover it. 
00:27:13.097 [2024-11-20 10:00:49.898163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.097 [2024-11-20 10:00:49.898189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.898300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.898335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.898459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.898487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.898605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.898631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.898745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.898771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.898865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.898891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.899019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.899047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.899162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.899188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.899308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.899335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.899432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.899458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 
00:27:13.098 [2024-11-20 10:00:49.899546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.899573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.899654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.899680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.899822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.899848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.899992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.900018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.900098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.900125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.900216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.900244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.900343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.900371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.900498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.900526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.900684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.900725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.900871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.900898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 
00:27:13.098 [2024-11-20 10:00:49.901015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.901040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.901157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.901184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.901298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.901329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.901447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.901474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.901660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.901686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.901801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.901827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.901945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.901971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.902049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.902075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.902192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.902217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.902325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.902351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 
00:27:13.098 [2024-11-20 10:00:49.902493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.902519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.902630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.902657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.902739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.902771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.902885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.902911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.903002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.903028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.903147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.903175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.903270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.903295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.098 [2024-11-20 10:00:49.903400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.098 [2024-11-20 10:00:49.903425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.098 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.903565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.903590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.903678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.903704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 
00:27:13.099 [2024-11-20 10:00:49.903827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.903852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.903938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.903965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.904076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.904101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.904300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.904331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.904474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.904499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.904593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.904618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.904746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.904771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.904892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.904920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.905038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.905064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.905176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.905201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 
00:27:13.099 [2024-11-20 10:00:49.905313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.905340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.905483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.905508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.905620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.905646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.905794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.905821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.906019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.906045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.906157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.906182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.906273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.906299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.906452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.906478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.906590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.906615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.906710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.906737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 
00:27:13.099 [2024-11-20 10:00:49.906824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.906850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.906993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.907019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.907097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.907123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.907211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.907236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.907332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.907358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.907474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.907501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.907579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.907605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.907723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.907749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.907870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.907898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.907980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.908006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 
00:27:13.099 [2024-11-20 10:00:49.908121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.908147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.908297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.908330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.099 qpair failed and we were unable to recover it. 00:27:13.099 [2024-11-20 10:00:49.908440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.099 [2024-11-20 10:00:49.908471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.908578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.908605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.908696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.908722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.908833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.908858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.908973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.908999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.909093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.909122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.909216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.909242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.909355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.909381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 
00:27:13.100 [2024-11-20 10:00:49.909499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.909525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.909636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.909662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.909785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.909811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.909906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.909934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.910016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.910042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.910155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.910180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.910276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.910309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.910397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.910421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.910501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.910527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.910667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.910693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 
00:27:13.100 [2024-11-20 10:00:49.910814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.910840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.910983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.911008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.911107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.911135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.911252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.911280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.911393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.911432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.911558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.911585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.911695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.911724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.911818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.911846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.911970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.912004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.912166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.912195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 
00:27:13.100 [2024-11-20 10:00:49.912331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.912357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.912479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.912505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.912587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.912613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.912750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.912776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.912910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.912944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.913137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.913163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.913272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.100 [2024-11-20 10:00:49.913320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.100 qpair failed and we were unable to recover it. 00:27:13.100 [2024-11-20 10:00:49.913475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.913504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.913596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.913622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.913753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.913779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 
00:27:13.101 [2024-11-20 10:00:49.913918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.913962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.914102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.914129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.914242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.914268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.914404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.914432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.914576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.914602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.914721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.914747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.914884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.914911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.915050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.915076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.915165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.915192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.915324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.915368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 
00:27:13.101 [2024-11-20 10:00:49.915506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.915533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.915642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.915689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.915832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.915880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.916016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.916066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.916159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.916184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.916308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.916349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.916476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.916501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.916616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.916642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.916800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.916828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.916946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.916974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 
00:27:13.101 [2024-11-20 10:00:49.917094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.917121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.917242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.917280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.917371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.917399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.917517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.917544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.917703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.917753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.917859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.917886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.918031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.918060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.918157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.918184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.101 qpair failed and we were unable to recover it. 00:27:13.101 [2024-11-20 10:00:49.918307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.101 [2024-11-20 10:00:49.918333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.918447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.918477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 
00:27:13.102 [2024-11-20 10:00:49.918588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.918613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.918777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.918824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.918975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.919002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.919141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.919168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.919287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.919326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.919466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.919493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.919580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.919606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.919691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.919717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.919835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.919860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.920000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.920026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 
00:27:13.102 [2024-11-20 10:00:49.920218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.920246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.920413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.920455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.920602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.920633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.920759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.920787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.920922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.920971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.921061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.921087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.921201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.921227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.921341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.921368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.921487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.921513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.921598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.921624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 
00:27:13.102 [2024-11-20 10:00:49.921769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.921795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.921915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.921942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.922062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.922090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.922211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.922237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.922392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.922433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.922523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.922550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.922676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.922717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.922844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.922872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.923012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.923040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 00:27:13.102 [2024-11-20 10:00:49.923155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.102 [2024-11-20 10:00:49.923181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.102 qpair failed and we were unable to recover it. 
00:27:13.108 [2024-11-20 10:00:49.951725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.951754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.108 [2024-11-20 10:00:49.951866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.951894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.108 [2024-11-20 10:00:49.952035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.952064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.108 [2024-11-20 10:00:49.952179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.952205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.108 [2024-11-20 10:00:49.952297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.952329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.108 [2024-11-20 10:00:49.952446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.952473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.108 [2024-11-20 10:00:49.952617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.952645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.108 [2024-11-20 10:00:49.952768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.952796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.108 [2024-11-20 10:00:49.952889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.952916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.108 [2024-11-20 10:00:49.953057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.953085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 
00:27:13.108 [2024-11-20 10:00:49.953179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.953207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.108 [2024-11-20 10:00:49.953331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.953359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.108 [2024-11-20 10:00:49.953449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.953475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.108 [2024-11-20 10:00:49.953595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.953621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.108 [2024-11-20 10:00:49.953759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.953785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.108 [2024-11-20 10:00:49.953899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.953926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.108 [2024-11-20 10:00:49.954014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.954040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.108 [2024-11-20 10:00:49.954152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.954178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.108 [2024-11-20 10:00:49.954272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.954298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.108 [2024-11-20 10:00:49.954394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.954420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 
00:27:13.108 [2024-11-20 10:00:49.954540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.954567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.108 [2024-11-20 10:00:49.954698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.108 [2024-11-20 10:00:49.954725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.108 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.954836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.954862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.954953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.954980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.955059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.955085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.955230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.955256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.955354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.955381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.955499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.955525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.955629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.955655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.955797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.955824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 
00:27:13.109 [2024-11-20 10:00:49.955940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.955967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.956091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.956119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.956239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.956266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.956419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.956447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.956567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.956619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.956779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.956808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.956925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.956952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.957050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.957078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.957216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.957243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.957364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.957391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 
00:27:13.109 [2024-11-20 10:00:49.957508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.957536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.957657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.957684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.957777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.957803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.957945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.957973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.958109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.958137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.958232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.958258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.958388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.958415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.958528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.958560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.958657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.958682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.958898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.958955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 
00:27:13.109 [2024-11-20 10:00:49.959076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.959102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.959236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.959263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.959373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.959401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.959546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.959592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.959759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.959830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.959958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.960016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.960134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.960162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.960257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.960284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.960387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.960413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 00:27:13.109 [2024-11-20 10:00:49.960553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.109 [2024-11-20 10:00:49.960580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.109 qpair failed and we were unable to recover it. 
00:27:13.110 [2024-11-20 10:00:49.960695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.960723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.960848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.960877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.960994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.961020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.961106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.961132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.961273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.961300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.961431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.961459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.961551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.961577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.961688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.961713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.961822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.961849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.961941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.961967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 
00:27:13.110 [2024-11-20 10:00:49.962084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.962113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.962245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.962287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.962422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.962451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.962539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.962565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.962688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.962714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.962854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.962881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.963018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.963045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.963145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.963184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.963336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.963368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.963461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.963487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 
00:27:13.110 [2024-11-20 10:00:49.963608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.963636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.963762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.963790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.963906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.963936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.964077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.964105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.964247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.964275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.964401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.964429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.964529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.964557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.964699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.964731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.964851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.964878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.965057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.965084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 
00:27:13.110 [2024-11-20 10:00:49.965196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.965229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.965345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.965371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.965489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.965514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.965600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.965625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.965767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.965803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.110 [2024-11-20 10:00:49.965922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.110 [2024-11-20 10:00:49.965968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.110 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.966084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.966127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.966247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.966274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.966365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.966391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.966506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.966531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 
00:27:13.111 [2024-11-20 10:00:49.966642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.966667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.966809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.966837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.966923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.966948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.967059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.967084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.967204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.967230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.967381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.967409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.967497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.967523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.967635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.967660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.967767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.967792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.967937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.967985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 
00:27:13.111 [2024-11-20 10:00:49.968199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.968225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.968367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.968394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.968486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.968512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.968629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.968654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.968799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.968830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.968951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.968977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.969086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.969112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.969247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.969283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.969417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.969444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.969586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.969613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 
00:27:13.111 [2024-11-20 10:00:49.969729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.969758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.969893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.969930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.970084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.970131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.970313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.111 [2024-11-20 10:00:49.970362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.111 qpair failed and we were unable to recover it. 00:27:13.111 [2024-11-20 10:00:49.970470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.112 [2024-11-20 10:00:49.970496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.112 qpair failed and we were unable to recover it. 00:27:13.112 [2024-11-20 10:00:49.970581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.112 [2024-11-20 10:00:49.970606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.112 qpair failed and we were unable to recover it. 00:27:13.112 [2024-11-20 10:00:49.970712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.112 [2024-11-20 10:00:49.970739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.112 qpair failed and we were unable to recover it. 00:27:13.112 [2024-11-20 10:00:49.970875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.112 [2024-11-20 10:00:49.970911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.112 qpair failed and we were unable to recover it. 00:27:13.112 [2024-11-20 10:00:49.971056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.112 [2024-11-20 10:00:49.971084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.112 qpair failed and we were unable to recover it. 00:27:13.112 [2024-11-20 10:00:49.971174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.112 [2024-11-20 10:00:49.971199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.112 qpair failed and we were unable to recover it. 
00:27:13.112 [2024-11-20 10:00:49.971296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.112 [2024-11-20 10:00:49.971329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.112 qpair failed and we were unable to recover it. 00:27:13.112 [2024-11-20 10:00:49.971438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.112 [2024-11-20 10:00:49.971463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.112 qpair failed and we were unable to recover it. 00:27:13.112 [2024-11-20 10:00:49.971547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.112 [2024-11-20 10:00:49.971573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.112 qpair failed and we were unable to recover it. 00:27:13.112 [2024-11-20 10:00:49.971685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.112 [2024-11-20 10:00:49.971712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.112 qpair failed and we were unable to recover it. 00:27:13.112 [2024-11-20 10:00:49.971803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.112 [2024-11-20 10:00:49.971828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.112 qpair failed and we were unable to recover it. 00:27:13.112 [2024-11-20 10:00:49.971912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.112 [2024-11-20 10:00:49.971937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.112 qpair failed and we were unable to recover it. 00:27:13.112 [2024-11-20 10:00:49.972096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.112 [2024-11-20 10:00:49.972143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.112 qpair failed and we were unable to recover it. 00:27:13.112 [2024-11-20 10:00:49.972260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.112 [2024-11-20 10:00:49.972287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.112 qpair failed and we were unable to recover it. 00:27:13.112 [2024-11-20 10:00:49.972437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.112 [2024-11-20 10:00:49.972465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.112 qpair failed and we were unable to recover it. 00:27:13.112 [2024-11-20 10:00:49.972547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.112 [2024-11-20 10:00:49.972573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.112 qpair failed and we were unable to recover it. 
00:27:13.112 [2024-11-20 10:00:49.972660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.112 [2024-11-20 10:00:49.972686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420
00:27:13.112 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously for timestamps 2024-11-20 10:00:49.972800 through 10:00:49.988633 ...]
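For context on the repeated errno = 111 above: on Linux that value is ECONNREFUSED, meaning nothing was accepting TCP connections on 10.0.0.2:4420 at the moment posix_sock_create() called connect(). The standalone sketch below is not SPDK code; the address and port are simply taken from the log, and it only shows how a plain connect() call surfaces the same errno.

/* Minimal standalone sketch (not SPDK code): reproduce the errno = 111
 * (ECONNREFUSED) that posix_sock_create() reports when nothing is
 * listening on the target address/port taken from the log above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* On Linux, ECONNREFUSED prints as errno 111 -- the value in the log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}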
00:27:13.407 [2024-11-20 10:00:49.988741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.407 [2024-11-20 10:00:49.988782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420
00:27:13.407 qpair failed and we were unable to recover it.
[... identical failure sequences continue for timestamps 2024-11-20 10:00:49.988881 through 10:00:50.003840, interleaving tqpair=0x7f4a30000b90 and tqpair=0x120dfa0, always with connect() failed, errno = 111 against addr=10.0.0.2, port=4420, and each attempt ending with "qpair failed and we were unable to recover it." ...]
00:27:13.409 [2024-11-20 10:00:50.003953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.409 [2024-11-20 10:00:50.003990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.409 qpair failed and we were unable to recover it. 00:27:13.409 [2024-11-20 10:00:50.004125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.409 [2024-11-20 10:00:50.004165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.409 qpair failed and we were unable to recover it. 00:27:13.409 [2024-11-20 10:00:50.004262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.409 [2024-11-20 10:00:50.004289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.409 qpair failed and we were unable to recover it. 00:27:13.409 [2024-11-20 10:00:50.004401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.409 [2024-11-20 10:00:50.004428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.409 qpair failed and we were unable to recover it. 00:27:13.409 [2024-11-20 10:00:50.004526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.409 [2024-11-20 10:00:50.004553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.409 qpair failed and we were unable to recover it. 00:27:13.409 [2024-11-20 10:00:50.004664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.409 [2024-11-20 10:00:50.004702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.409 qpair failed and we were unable to recover it. 00:27:13.409 [2024-11-20 10:00:50.004807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.409 [2024-11-20 10:00:50.004832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.409 qpair failed and we were unable to recover it. 00:27:13.409 [2024-11-20 10:00:50.004921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.409 [2024-11-20 10:00:50.004947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.409 qpair failed and we were unable to recover it. 00:27:13.409 [2024-11-20 10:00:50.005056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.409 [2024-11-20 10:00:50.005082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.409 qpair failed and we were unable to recover it. 00:27:13.409 [2024-11-20 10:00:50.005200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.409 [2024-11-20 10:00:50.005269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.409 qpair failed and we were unable to recover it. 
00:27:13.409 [2024-11-20 10:00:50.005436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.409 [2024-11-20 10:00:50.005486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.409 qpair failed and we were unable to recover it. 00:27:13.409 [2024-11-20 10:00:50.005659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.409 [2024-11-20 10:00:50.005708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.409 qpair failed and we were unable to recover it. 00:27:13.409 [2024-11-20 10:00:50.005920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.409 [2024-11-20 10:00:50.005966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.409 qpair failed and we were unable to recover it. 00:27:13.409 [2024-11-20 10:00:50.006125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.409 [2024-11-20 10:00:50.006173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.409 qpair failed and we were unable to recover it. 00:27:13.409 [2024-11-20 10:00:50.006397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.409 [2024-11-20 10:00:50.006435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.409 qpair failed and we were unable to recover it. 00:27:13.409 [2024-11-20 10:00:50.006573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.409 [2024-11-20 10:00:50.006610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.409 qpair failed and we were unable to recover it. 00:27:13.409 [2024-11-20 10:00:50.006773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.409 [2024-11-20 10:00:50.006821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.409 qpair failed and we were unable to recover it. 00:27:13.409 [2024-11-20 10:00:50.007040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.409 [2024-11-20 10:00:50.007087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.007236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.007272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.007389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.007428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 
00:27:13.410 [2024-11-20 10:00:50.007581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.007642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.007846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.007893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.008111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.008158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.008320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.008378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.008490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.008525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.008720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.008766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.008951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.008997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.009223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.009269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.009472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.009509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.009726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.009773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 
00:27:13.410 [2024-11-20 10:00:50.009915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.009962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.010133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.010181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.010392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.010430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.010557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.010611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.010760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.010807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.011031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.011079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.011261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.011319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.011467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.011503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.011646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.011692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.011917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.011964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 
00:27:13.410 [2024-11-20 10:00:50.012120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.012167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.012376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.012422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.012568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.012624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.012807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.012843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.012982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.013042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.013213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.013260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.013418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.013455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.013578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.013616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.013739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.013791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.013963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.014010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 
00:27:13.410 [2024-11-20 10:00:50.014165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.014212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.014385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.014421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.014533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.014569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.014752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.014799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.014939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.410 [2024-11-20 10:00:50.014986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.410 qpair failed and we were unable to recover it. 00:27:13.410 [2024-11-20 10:00:50.015135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.015195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.015389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.015427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.015600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.015646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.015860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.015907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.016060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.016126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 
00:27:13.411 [2024-11-20 10:00:50.016367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.016404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.016517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.016553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.016711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.016760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.017012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.017059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.017216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.017262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.017453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.017491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.017634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.017682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.017864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.017900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.018074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.018122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.018301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.018376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 
00:27:13.411 [2024-11-20 10:00:50.018496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.018533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.018667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.018704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.018875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.018911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.019056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.019092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.019296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.019367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.019516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.019553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.019679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.019717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.019893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.019929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.020098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.020145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.020334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.020386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 
00:27:13.411 [2024-11-20 10:00:50.020504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.020540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.020657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.020700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.020884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.020931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.021148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.021196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.021372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.021409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.021536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.021574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.021767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.021813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.411 [2024-11-20 10:00:50.021990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.411 [2024-11-20 10:00:50.022038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.411 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.022178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.022226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.022419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.022457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 
00:27:13.412 [2024-11-20 10:00:50.022601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.022647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.022816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.022852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.023002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.023061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.023243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.023289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.023450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.023487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.023674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.023721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.023877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.023923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.024089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.024135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.024319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.024367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.024546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.024592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 
00:27:13.412 [2024-11-20 10:00:50.024827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.024873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.025025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.025072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.025263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.025325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.025516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.025562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.025712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.025759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.025942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.025991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.026178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.026224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.026395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.026442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.026648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.026695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.026885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.026932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 
00:27:13.412 [2024-11-20 10:00:50.027147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.027194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.027387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.027435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.027592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.027639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.027779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.027824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.028009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.028055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.028240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.028286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.028491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.028537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.028729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.028777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.028934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.028983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.029206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.029253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 
00:27:13.412 [2024-11-20 10:00:50.029416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.029465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.029652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.029709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.029929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.029975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.030160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.030207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.030363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.030413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.412 [2024-11-20 10:00:50.030588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.412 [2024-11-20 10:00:50.030636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.412 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.030782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.030828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.031040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.031087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.031251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.031297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.031528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.031574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 
00:27:13.413 [2024-11-20 10:00:50.031719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.031765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.031949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.031996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.032157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.032203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.032360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.032410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.032568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.032615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.032804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.032851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.033034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.033080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.033239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.033286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.033461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.033511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.033661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.033709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 
00:27:13.413 [2024-11-20 10:00:50.033935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.033981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.034194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.034240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.034435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.034482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.034663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.034709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.034846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.034892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.035087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.035133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.035321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.035368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.035565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.035611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.035798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.035845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 00:27:13.413 [2024-11-20 10:00:50.036056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.413 [2024-11-20 10:00:50.036102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.413 qpair failed and we were unable to recover it. 
00:27:13.413 [2024-11-20 10:00:50.036287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.413 [2024-11-20 10:00:50.036349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420
00:27:13.413 qpair failed and we were unable to recover it.
00:27:13.413 [... the same three-record sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats with advancing timestamps from 10:00:50.036 through 10:00:50.089 ...]
00:27:13.414 [2024-11-20 10:00:50.040754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121bf30 is same with the state(6) to be set
00:27:13.414 [... six of the repeated connect() failures, timestamps 10:00:50.041488 through 10:00:50.042791, report tqpair=0x120dfa0 instead of 0x7f4a2c000b90; all other fields are identical ...]
00:27:13.419 [2024-11-20 10:00:50.088994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.419 [2024-11-20 10:00:50.089043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420
00:27:13.419 qpair failed and we were unable to recover it.
00:27:13.419 [2024-11-20 10:00:50.089271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.089341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.089565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.089615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.089849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.089899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.090089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.090139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.090340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.090391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.090614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.090663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.090825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.090873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.091024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.091073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.091269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.091351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.091559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.091607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 
00:27:13.419 [2024-11-20 10:00:50.091796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.091845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.092037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.092086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.092273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.092339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.092574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.092623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.092810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.092859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.093067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.093125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.093350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.093401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.093544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.093593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.093810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.093860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.094015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.094066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 
00:27:13.419 [2024-11-20 10:00:50.094232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.094281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.094502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.094549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.094708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.094755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.094969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.095014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.095158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.095204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.095366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.095414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.095563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.095618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.419 qpair failed and we were unable to recover it. 00:27:13.419 [2024-11-20 10:00:50.095780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.419 [2024-11-20 10:00:50.095827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.096008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.096055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.096247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.096315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 
00:27:13.420 [2024-11-20 10:00:50.096527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.096573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.096768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.096813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.096961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.097007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.097203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.097249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.097419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.097466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.097615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.097663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.097849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.097895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.098108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.098154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.098342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.098406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.098560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.098603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 
00:27:13.420 [2024-11-20 10:00:50.098811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.098855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.098987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.099031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.099217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.099262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.099481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.099560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.099800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.099868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.100072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.100135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.100341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.100389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.100530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.100556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.100643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.100669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.100768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.100794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 
00:27:13.420 [2024-11-20 10:00:50.100906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.100932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.101023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.101049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.101162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.101189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.101275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.101319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.101440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.101466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.101560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.101599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.101736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.101762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.101902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.101928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.102046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.102073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.102188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.102214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 
00:27:13.420 [2024-11-20 10:00:50.102328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.102355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.102450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.420 [2024-11-20 10:00:50.102477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.420 qpair failed and we were unable to recover it. 00:27:13.420 [2024-11-20 10:00:50.102606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.102632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.102756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.102782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.102865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.102893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.103013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.103039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.103123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.103150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.103245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.103271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.103399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.103426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.103518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.103545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 
00:27:13.421 [2024-11-20 10:00:50.103646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.103673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.103791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.103826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.103920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.103947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.104066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.104093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.104201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.104227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.104326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.104354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.104467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.104493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.104567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.104593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.104683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.104709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.104803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.104829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 
00:27:13.421 [2024-11-20 10:00:50.104928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.104954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.105070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.105096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.105232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.105272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.105428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.105457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.105606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.105633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.105723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.105751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.105861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.105888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.106008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.106035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.106166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.106194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.106345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.106372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 
00:27:13.421 [2024-11-20 10:00:50.106469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.106496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.106626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.106652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.106770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.106797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.106888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.106916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.107066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.107093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.107175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.107207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.107318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.107346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.107467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.107493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.107615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.107642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 00:27:13.421 [2024-11-20 10:00:50.107756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.421 [2024-11-20 10:00:50.107783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.421 qpair failed and we were unable to recover it. 
00:27:13.422 [2024-11-20 10:00:50.107872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.107900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.108027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.108054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.108164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.108190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.108314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.108341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.108461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.108488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.108605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.108632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.108720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.108746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.108853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.108879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.108971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.108998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.109083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.109109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 
00:27:13.422 [2024-11-20 10:00:50.109213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.109239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.109368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.109395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.109476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.109503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.109621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.109647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.109787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.109813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.109955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.109981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.110094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.110120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.110204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.110230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.110338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.110379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.110530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.110558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 
00:27:13.422 [2024-11-20 10:00:50.110681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.110711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.110846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.110872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.110995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.111022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.111106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.111133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.111259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.111286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.111417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.111443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.111554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.111580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.111683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.111708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.111899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.111925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.112020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.112046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 
00:27:13.422 [2024-11-20 10:00:50.112156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.112182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.112373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.112399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.112482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.112509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.112628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.112654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.112744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.112769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.112848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.112879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.113022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.113048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.422 [2024-11-20 10:00:50.113238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.422 [2024-11-20 10:00:50.113264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.422 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.113417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.113444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.113548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.113574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 
00:27:13.423 [2024-11-20 10:00:50.113678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.113705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.113853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.113879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.113994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.114020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.114138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.114164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.114250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.114275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.114390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.114431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.114562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.114590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.114714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.114743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.114838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.114864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.114968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.114994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 
00:27:13.423 [2024-11-20 10:00:50.115080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.115107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.115221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.115247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.115356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.115383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.115499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.115525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.115632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.115657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.115750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.115776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.115864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.115890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.116008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.116036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.116126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.116153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.116266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.116307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 
00:27:13.423 [2024-11-20 10:00:50.116425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.116451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.116590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.116616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.116746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.116773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.116890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.116915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.117018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.117045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.117179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.117205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.117296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.117327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.117419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.117445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.117586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.117613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.117756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.117782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 
00:27:13.423 [2024-11-20 10:00:50.117880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.117907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.117998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.118024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.118125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.118151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.118233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.118259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.118390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.118417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.423 qpair failed and we were unable to recover it. 00:27:13.423 [2024-11-20 10:00:50.118500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.423 [2024-11-20 10:00:50.118530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.118646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.118672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.118758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.118784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.118911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.118937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.119061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.119087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 
00:27:13.424 [2024-11-20 10:00:50.119175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.119202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.119291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.119328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.119449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.119476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.119571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.119597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.119717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.119743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.119862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.119889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.119979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.120004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.120144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.120170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.120249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.120275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.120385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.120412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 
00:27:13.424 [2024-11-20 10:00:50.120525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.120551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.120670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.120696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.120777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.120804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.120911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.120937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.121060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.121085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.121179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.121205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.121287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.121319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.121415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.121441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.121548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.121574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.121661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.121687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 
00:27:13.424 [2024-11-20 10:00:50.121803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.121830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.121925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.121951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.122061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.122110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.122214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.122242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.122341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.122368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.122460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.122487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.122568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.122594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.122707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.122734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.122853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.122880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.122966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.122992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 
00:27:13.424 [2024-11-20 10:00:50.123076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.123104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.123311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.123358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.123486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.424 [2024-11-20 10:00:50.123528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.424 qpair failed and we were unable to recover it. 00:27:13.424 [2024-11-20 10:00:50.123698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.123743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.123864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.123906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.124071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.124121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.124309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.124353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.124519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.124560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.124710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.124753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.124960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.125002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 
00:27:13.425 [2024-11-20 10:00:50.125146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.125189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.125386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.125428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.125591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.125632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.125785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.125827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.126010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.126052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.126255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.126297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.126450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.126492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.126615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.126656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.126835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.126877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.127058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.127099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 
00:27:13.425 [2024-11-20 10:00:50.127266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.127315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.127484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.127528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.127715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.127757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.127923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.127964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.128110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.128153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.128268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.128317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.128512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.128554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.128735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.128777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.128924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.128966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.129107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.129148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 
00:27:13.425 [2024-11-20 10:00:50.129286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.129350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.129515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.129556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.129766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.129809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.129976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.130019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.130200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.130241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.130390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.130432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.130579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.130620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.425 [2024-11-20 10:00:50.130813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.425 [2024-11-20 10:00:50.130855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.425 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.131014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.131056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.131202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.131245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 
00:27:13.426 [2024-11-20 10:00:50.131420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.131462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.131632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.131674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.131843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.131885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.132037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.132078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.132250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.132292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.132474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.132522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.132642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.132685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.132870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.132912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.133048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.133101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.133271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.133320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 
00:27:13.426 [2024-11-20 10:00:50.133463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.133506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.133674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.133746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.133946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.133994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.134162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.134209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.134353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.134401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.134554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.134600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.134777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.134823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.135010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.135058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.135273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.135339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.135543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.135592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 
00:27:13.426 [2024-11-20 10:00:50.135815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.135860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.136019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.136065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.136240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.136287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.136489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.136535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.136708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.136755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.136971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.137017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.137181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.137227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.137499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.137547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.137702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.137748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.137944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.137991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 
00:27:13.426 [2024-11-20 10:00:50.138144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.138190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.138416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.138464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.138610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.138664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.138836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.138883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.139057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.139103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.426 qpair failed and we were unable to recover it. 00:27:13.426 [2024-11-20 10:00:50.139291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.426 [2024-11-20 10:00:50.139353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.427 qpair failed and we were unable to recover it. 00:27:13.427 [2024-11-20 10:00:50.139547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.427 [2024-11-20 10:00:50.139595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.427 qpair failed and we were unable to recover it. 00:27:13.427 [2024-11-20 10:00:50.139765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.427 [2024-11-20 10:00:50.139811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.427 qpair failed and we were unable to recover it. 00:27:13.427 [2024-11-20 10:00:50.140007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.427 [2024-11-20 10:00:50.140053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.427 qpair failed and we were unable to recover it. 00:27:13.427 [2024-11-20 10:00:50.140191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.427 [2024-11-20 10:00:50.140237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.427 qpair failed and we were unable to recover it. 
00:27:13.427 [2024-11-20 10:00:50.140466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.427 [2024-11-20 10:00:50.140515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.427 qpair failed and we were unable to recover it. 00:27:13.427 [2024-11-20 10:00:50.140724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.427 [2024-11-20 10:00:50.140770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.427 qpair failed and we were unable to recover it. 00:27:13.427 [2024-11-20 10:00:50.140964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.427 [2024-11-20 10:00:50.141010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.427 qpair failed and we were unable to recover it. 00:27:13.427 [2024-11-20 10:00:50.141217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.427 [2024-11-20 10:00:50.141263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.427 qpair failed and we were unable to recover it. 00:27:13.427 [2024-11-20 10:00:50.141482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.427 [2024-11-20 10:00:50.141553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.427 qpair failed and we were unable to recover it. 00:27:13.427 [2024-11-20 10:00:50.141773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.427 [2024-11-20 10:00:50.141831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.427 qpair failed and we were unable to recover it. 00:27:13.427 [2024-11-20 10:00:50.142026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.427 [2024-11-20 10:00:50.142073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.427 qpair failed and we were unable to recover it. 00:27:13.427 [2024-11-20 10:00:50.142224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.427 [2024-11-20 10:00:50.142272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.427 qpair failed and we were unable to recover it. 00:27:13.427 [2024-11-20 10:00:50.142439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.427 [2024-11-20 10:00:50.142486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.427 qpair failed and we were unable to recover it. 00:27:13.427 [2024-11-20 10:00:50.142730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.427 [2024-11-20 10:00:50.142776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.427 qpair failed and we were unable to recover it. 
00:27:13.427 [2024-11-20 10:00:50.142954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.427 [2024-11-20 10:00:50.143010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420
00:27:13.427 qpair failed and we were unable to recover it.
00:27:13.427 [2024-11-20 10:00:50.143200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.427 [2024-11-20 10:00:50.143245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420
00:27:13.427 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeats for tqpair=0x7f4a38000b90 through 2024-11-20 10:00:50.144270 ...]
00:27:13.427 [2024-11-20 10:00:50.144489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.427 [2024-11-20 10:00:50.144540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420
00:27:13.427 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats with increasing timestamps, mostly for tqpair=0x7f4a30000b90 and intermittently for tqpair=0x7f4a38000b90, through 2024-11-20 10:00:50.186781 ...]
00:27:13.433 [2024-11-20 10:00:50.186944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.433 [2024-11-20 10:00:50.186989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420
00:27:13.433 qpair failed and we were unable to recover it.
00:27:13.433 [2024-11-20 10:00:50.187164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.187208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.187383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.187428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.187601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.187645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.187786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.187829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.188033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.188077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.188204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.188248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.188449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.188494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.188691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.188735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.188919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.188964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.189138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.189184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 
00:27:13.433 [2024-11-20 10:00:50.189367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.189411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.189589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.189635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.189839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.189890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.190102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.190146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.190370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.190414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.190627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.190671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.190863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.190919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.191080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.191124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.191318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.191364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.191530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.191575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 
00:27:13.433 [2024-11-20 10:00:50.191740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.191785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.191910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.191954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.192151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.192195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.192348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.192392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.192537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.192584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.192718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.192764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.192943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.192988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.193175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.193220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.193494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.193563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.193771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.193827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 
00:27:13.433 [2024-11-20 10:00:50.194044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.194101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.194292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.194346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.194491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.194534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.194693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.194737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.194914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.194959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.195136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.195179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.433 qpair failed and we were unable to recover it. 00:27:13.433 [2024-11-20 10:00:50.195323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.433 [2024-11-20 10:00:50.195366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.195512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.195558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.195770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.195813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.195971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.196015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 
00:27:13.434 [2024-11-20 10:00:50.196195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.196240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.196452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.196497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.196675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.196719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.196902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.196947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.197128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.197175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.197370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.197415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.197596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.197640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.197772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.197816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.197959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.198002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.198182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.198225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 
00:27:13.434 [2024-11-20 10:00:50.198398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.198470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.198637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.198708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.198896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.198961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.199152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.199210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.199411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.199455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.199607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.199651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.199782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.199827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.200012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.200057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.200266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.200349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.200502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.200543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 
00:27:13.434 [2024-11-20 10:00:50.200742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.200797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.201014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.201069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.201285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.201367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.201573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.201616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.201800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.201844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.201989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.202034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.202220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.202276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.202476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.202520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.202682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.202727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.434 [2024-11-20 10:00:50.202864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.202910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 
00:27:13.434 [2024-11-20 10:00:50.203057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.434 [2024-11-20 10:00:50.203101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.434 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.203277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.203334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.203488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.203531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.203748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.203793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.203944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.203989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.204166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.204217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.204388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.204434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.204610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.204654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.204833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.204877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.205028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.205072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 
00:27:13.435 [2024-11-20 10:00:50.205254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.205298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.205486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.205549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.205808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.205878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.206122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.206178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.206412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.206470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.206658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.206728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.206944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.207021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.207250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.207317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.207596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.207670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.207931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.208006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 
00:27:13.435 [2024-11-20 10:00:50.208221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.208278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.208458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.208502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.208715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.208806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.209042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.209117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.209347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.209391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.209578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.209623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.209833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.209877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.210028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.210071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.210258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.210311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.210453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.210505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 
00:27:13.435 [2024-11-20 10:00:50.210701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.210744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.210907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.210950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.211093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.211136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.211282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.211340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.211469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.211513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.211682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.211726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.211899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.211943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.212095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.212142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.435 [2024-11-20 10:00:50.212364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.435 [2024-11-20 10:00:50.212408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.435 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.212559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.212630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 
00:27:13.436 [2024-11-20 10:00:50.212854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.212898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.213015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.213059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.213268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.213321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.213480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.213525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.213755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.213811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.214044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.214087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.214270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.214323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.214498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.214543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.214673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.214714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.214890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.214934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 
00:27:13.436 [2024-11-20 10:00:50.215083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.215126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.215276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.215330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.215504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.215550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.215728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.215771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.215941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.215984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.216173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.216217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.216450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.216504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.216733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.216798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.216988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.217034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.217197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.217250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 
00:27:13.436 [2024-11-20 10:00:50.217450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.217505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.217672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.217716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.217885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.217938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.218115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.218160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.218297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.218351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.218510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.218554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.218707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.218751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.218948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.218994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.219142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.219187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.219407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.219452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 
00:27:13.436 [2024-11-20 10:00:50.219610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.219654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.219868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.219922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.220063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.220108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.220283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.220340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.220488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.220521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.220693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.220725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.436 qpair failed and we were unable to recover it. 00:27:13.436 [2024-11-20 10:00:50.220850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.436 [2024-11-20 10:00:50.220908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.221099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.221142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.221320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.221377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.221515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.221549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 
00:27:13.437 [2024-11-20 10:00:50.221700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.221750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.221893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.221936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.222145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.222188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.222353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.222388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.222529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.222561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.222682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.222727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.222902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.222945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.223085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.223139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.223333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.223382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.223508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.223546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 
00:27:13.437 [2024-11-20 10:00:50.223717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.223772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.223922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.223977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.224130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.224161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.224321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.224354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.224458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.224491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.224626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.224680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.224876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.224936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.225064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.225097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.225237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.225270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.225400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.225464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 
00:27:13.437 [2024-11-20 10:00:50.225611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.225654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.225753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.225786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.225897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.225936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.226062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.226095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.226210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.226243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.226377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.226412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.226549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.226593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.226765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.226808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.227023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.227077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.227264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.227298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 
00:27:13.437 [2024-11-20 10:00:50.227417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.227450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.227619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.227672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.227845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.227904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.228132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.437 [2024-11-20 10:00:50.228184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.437 qpair failed and we were unable to recover it. 00:27:13.437 [2024-11-20 10:00:50.228377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.228427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.228569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.228602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.228774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.228819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.228968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.229012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.229200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.229246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.229423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.229456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 
00:27:13.438 [2024-11-20 10:00:50.229585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.229637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.229788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.229845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.230009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.230052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.230211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.230253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.230405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.230439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.230574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.230607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.230802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.230846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.231067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.231111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.231275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.231346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.231497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.231531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 
00:27:13.438 [2024-11-20 10:00:50.231652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.231685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.231810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.231861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.232044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.232087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.232261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.232293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.232393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.232426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.232564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.232627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.232825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.232858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.232980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.233037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.233216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.233259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.233404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.233435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 
00:27:13.438 [2024-11-20 10:00:50.233579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.233615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.233734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.233766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.233917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.233958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.234125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.234168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.234312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.234365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.234528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.234561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.234749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.234783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.234903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.234940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.438 [2024-11-20 10:00:50.235067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.438 [2024-11-20 10:00:50.235119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.438 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.235288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.235331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 
00:27:13.439 [2024-11-20 10:00:50.235493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.235525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.235655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.235691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.235841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.235876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.236020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.236053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.236189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.236223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.236378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.236413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.236566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.236602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.236716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.236749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.236868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.236902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.237007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.237041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 
00:27:13.439 [2024-11-20 10:00:50.237182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.237224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.237347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.237397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.237499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.237532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.237652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.237687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.237819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.237852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.237989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.238024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.238184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.238217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.238362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.238395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.238504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.238537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.238663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.238696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 
00:27:13.439 [2024-11-20 10:00:50.238809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.238842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.238983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.239015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.239179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.239212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.239369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.239403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.239547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.239580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.239720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.239752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.239880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.239921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.240087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.240129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.240298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.240365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.240473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.240506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 
00:27:13.439 [2024-11-20 10:00:50.240656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.240689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.240852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.240893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.241052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.241101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.241243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.241276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.241411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.241444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.241567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.241603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.439 qpair failed and we were unable to recover it. 00:27:13.439 [2024-11-20 10:00:50.241717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:00:50.241752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.241904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.241940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.242086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.242121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.242245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.242278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 
00:27:13.440 [2024-11-20 10:00:50.242408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.242442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.242573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.242608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.242725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.242761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.242871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.242907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.243023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.243058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.243237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.243270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.243403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.243436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.243580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.243614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.243750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.243785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.243927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.243961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 
00:27:13.440 [2024-11-20 10:00:50.244075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.244133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.244291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.244332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.244463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.244498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.244652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.244686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.244878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.244920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.245080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.245121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.245265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.245298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.245441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.245473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.245638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.245691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.245839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.245887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 
00:27:13.440 [2024-11-20 10:00:50.246093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.246156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.246334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.246384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.246795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.246853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.246973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.247007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.247148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.247179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.247338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.247370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.247474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.247507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.247618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.247651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.247816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.247849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.247983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.248035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 
00:27:13.440 [2024-11-20 10:00:50.248212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.248266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.248436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.248468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.248598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.248638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.248802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:00:50.248844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.440 qpair failed and we were unable to recover it. 00:27:13.440 [2024-11-20 10:00:50.248971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.441 [2024-11-20 10:00:50.249012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.441 qpair failed and we were unable to recover it. 00:27:13.441 [2024-11-20 10:00:50.249195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.441 [2024-11-20 10:00:50.249228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.441 qpair failed and we were unable to recover it. 00:27:13.441 [2024-11-20 10:00:50.249361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.441 [2024-11-20 10:00:50.249394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.441 qpair failed and we were unable to recover it. 00:27:13.441 [2024-11-20 10:00:50.249535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.441 [2024-11-20 10:00:50.249588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.441 qpair failed and we were unable to recover it. 00:27:13.441 [2024-11-20 10:00:50.249705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.441 [2024-11-20 10:00:50.249738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.441 qpair failed and we were unable to recover it. 00:27:13.441 [2024-11-20 10:00:50.249883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.441 [2024-11-20 10:00:50.249917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.441 qpair failed and we were unable to recover it. 
00:27:13.441 [2024-11-20 10:00:50.250101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.441 [2024-11-20 10:00:50.250135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.441 qpair failed and we were unable to recover it. 00:27:13.441 [2024-11-20 10:00:50.250284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.441 [2024-11-20 10:00:50.250351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.441 qpair failed and we were unable to recover it. 00:27:13.441 [2024-11-20 10:00:50.250469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.441 [2024-11-20 10:00:50.250501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.441 qpair failed and we were unable to recover it. 00:27:13.441 [2024-11-20 10:00:50.250672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.441 [2024-11-20 10:00:50.250707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.441 qpair failed and we were unable to recover it. 00:27:13.441 [2024-11-20 10:00:50.250862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.441 [2024-11-20 10:00:50.250904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.441 qpair failed and we were unable to recover it. 00:27:13.441 [2024-11-20 10:00:50.251048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.441 [2024-11-20 10:00:50.251088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.441 qpair failed and we were unable to recover it. 00:27:13.441 [2024-11-20 10:00:50.251284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.441 [2024-11-20 10:00:50.251334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.441 qpair failed and we were unable to recover it. 00:27:13.441 [2024-11-20 10:00:50.251494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.441 [2024-11-20 10:00:50.251526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.441 qpair failed and we were unable to recover it. 00:27:13.441 [2024-11-20 10:00:50.251731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.441 [2024-11-20 10:00:50.251765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.441 qpair failed and we were unable to recover it. 00:27:13.441 [2024-11-20 10:00:50.251901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.441 [2024-11-20 10:00:50.251935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.441 qpair failed and we were unable to recover it. 
00:27:13.441 .. 00:27:13.446 [2024-11-20 10:00:50.252126 .. 10:00:50.286512] posix.c:1054:posix_sock_create and nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock keep emitting the same pair of errors throughout this interval (connect() failed, errno = 111, followed by sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420), and every attempt ends with "qpair failed and we were unable to recover it."
00:27:13.446 [2024-11-20 10:00:50.286656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.446 [2024-11-20 10:00:50.286690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.446 qpair failed and we were unable to recover it. 00:27:13.446 [2024-11-20 10:00:50.286849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.446 [2024-11-20 10:00:50.286885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.446 qpair failed and we were unable to recover it. 00:27:13.446 [2024-11-20 10:00:50.287078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.446 [2024-11-20 10:00:50.287137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.446 qpair failed and we were unable to recover it. 00:27:13.446 [2024-11-20 10:00:50.287335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.446 [2024-11-20 10:00:50.287380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.446 qpair failed and we were unable to recover it. 00:27:13.446 [2024-11-20 10:00:50.287607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.446 [2024-11-20 10:00:50.287659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.446 qpair failed and we were unable to recover it. 00:27:13.446 [2024-11-20 10:00:50.287827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.446 [2024-11-20 10:00:50.287872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.446 qpair failed and we were unable to recover it. 00:27:13.730 [2024-11-20 10:00:50.288001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.730 [2024-11-20 10:00:50.288037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.730 qpair failed and we were unable to recover it. 00:27:13.730 [2024-11-20 10:00:50.288252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.730 [2024-11-20 10:00:50.288297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.730 qpair failed and we were unable to recover it. 00:27:13.730 [2024-11-20 10:00:50.288421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.730 [2024-11-20 10:00:50.288455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.730 qpair failed and we were unable to recover it. 00:27:13.730 [2024-11-20 10:00:50.288570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.730 [2024-11-20 10:00:50.288615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.730 qpair failed and we were unable to recover it. 
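Editor's note on the errno value that repeats throughout this block: the snippet below is illustrative only and is not part of the test output. It assumes a Linux host, where errno 111 is ECONNREFUSED, so the posix_sock_create messages above are reporting that the TCP connection to 10.0.0.2:4420 (4420 being the standard NVMe/TCP listener port) was actively refused.

    /* Illustrative only, not part of the test output: decode the repeated
     * errno 111 seen in the posix_sock_create messages above.  Assumes a
     * Linux host, where ECONNREFUSED is defined as 111. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Prints "errno 111: Connection refused" on Linux/glibc. */
        printf("errno %d: %s\n", ECONNREFUSED, strerror(ECONNREFUSED));
        return 0;
    }

A refused connection typically means the target host was reachable but nothing was listening on 10.0.0.2:4420 at that moment (for example, the listener was stopped or restarting); an unreachable host or network outage would normally surface as a timeout or unreachable error instead.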
00:27:13.730 [2024-11-20 10:00:50.288808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.730 [2024-11-20 10:00:50.288845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.730 qpair failed and we were unable to recover it. 00:27:13.730 [2024-11-20 10:00:50.288953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.730 [2024-11-20 10:00:50.288989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.730 qpair failed and we were unable to recover it. 00:27:13.730 [2024-11-20 10:00:50.289108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.730 [2024-11-20 10:00:50.289146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.730 qpair failed and we were unable to recover it. 00:27:13.730 [2024-11-20 10:00:50.289290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.730 [2024-11-20 10:00:50.289361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.730 qpair failed and we were unable to recover it. 00:27:13.730 [2024-11-20 10:00:50.289477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.730 [2024-11-20 10:00:50.289512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.730 qpair failed and we were unable to recover it. 00:27:13.730 [2024-11-20 10:00:50.289702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.730 [2024-11-20 10:00:50.289742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.730 qpair failed and we were unable to recover it. 00:27:13.730 [2024-11-20 10:00:50.289905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.730 [2024-11-20 10:00:50.289945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.730 qpair failed and we were unable to recover it. 00:27:13.730 [2024-11-20 10:00:50.290123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.730 [2024-11-20 10:00:50.290163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.730 qpair failed and we were unable to recover it. 00:27:13.730 [2024-11-20 10:00:50.290356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.730 [2024-11-20 10:00:50.290400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.730 qpair failed and we were unable to recover it. 00:27:13.730 [2024-11-20 10:00:50.290531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.730 [2024-11-20 10:00:50.290583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.730 qpair failed and we were unable to recover it. 
00:27:13.730 [2024-11-20 10:00:50.290752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.730 [2024-11-20 10:00:50.290807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.730 qpair failed and we were unable to recover it. 00:27:13.730 [2024-11-20 10:00:50.290989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.291029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.291238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.291273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.291426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.291480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.291646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.291685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.291829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.291874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.292074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.292114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.292273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.292341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.292479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.292522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.292693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.292733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 
00:27:13.731 [2024-11-20 10:00:50.292892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.292934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.293069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.293110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.293279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.293327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.293520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.293559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.293716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.293755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.293891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.293944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.294072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.294106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.294250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.294290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.294462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.294516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.294664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.294697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 
00:27:13.731 [2024-11-20 10:00:50.294823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.294863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.295021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.295059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.295258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.295290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.295427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.295463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.295649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.295683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.295850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.295884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.296066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.296106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.296244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.296293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.296447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.296481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.296594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.296629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 
00:27:13.731 [2024-11-20 10:00:50.296746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.296779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.296895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.296929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.297031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.297085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.297240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.297275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.297445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.297481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.297662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.297701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.297858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.297898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.298027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.298065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.731 [2024-11-20 10:00:50.298190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.731 [2024-11-20 10:00:50.298229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.731 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.298411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.298458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 
00:27:13.732 [2024-11-20 10:00:50.298592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.298630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.298769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.298808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.298964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.299004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.299164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.299206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.299370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.299410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.299604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.299642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.299763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.299816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.299936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.299970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.300114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.300148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.300277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.300353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 
00:27:13.732 [2024-11-20 10:00:50.300523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.300562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.300731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.300764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.300901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.300935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.301095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.301136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.301312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.301376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.301535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.301574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.301718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.301770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.301885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.301920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.302108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.302143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.302262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.302297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 
00:27:13.732 [2024-11-20 10:00:50.302428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.302466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.302600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.302640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.302796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.302835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.303032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.303066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.303204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.303238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.303386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.303422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.303551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.303586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.303737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.303769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.303935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.303972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.304100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.304140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 
00:27:13.732 [2024-11-20 10:00:50.304295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.304341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.304476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.304513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.304676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.304719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.304925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.304960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.305092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.305126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.305247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.305280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.732 qpair failed and we were unable to recover it. 00:27:13.732 [2024-11-20 10:00:50.305434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.732 [2024-11-20 10:00:50.305476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.305627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.305692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.305834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.305877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.306025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.306074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 
00:27:13.733 [2024-11-20 10:00:50.306290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.306333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.306473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.306507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.306648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.306684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.306814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.306859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.307028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.307068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.307230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.307271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.307449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.307489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.307657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.307696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.307854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.307905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.308057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.308119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 
00:27:13.733 [2024-11-20 10:00:50.308244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.308299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.308467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.308507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.308643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.308696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.308850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.308884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.309034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.309075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.309242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.309283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.309510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.309544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.309664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.309699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.309844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.309878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.310012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.310047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 
00:27:13.733 [2024-11-20 10:00:50.310180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.310214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.310367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.310405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.310519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.310555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.310672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.310717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.310844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.310883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.311015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.311055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.311204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.311245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.311376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.311418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.311581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.311622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.311783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.311823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 
00:27:13.733 [2024-11-20 10:00:50.311972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.312011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.312194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.312231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.312375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.312411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.733 qpair failed and we were unable to recover it. 00:27:13.733 [2024-11-20 10:00:50.312593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.733 [2024-11-20 10:00:50.312627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.734 qpair failed and we were unable to recover it. 00:27:13.734 [2024-11-20 10:00:50.312728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.734 [2024-11-20 10:00:50.312762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.734 qpair failed and we were unable to recover it. 00:27:13.734 [2024-11-20 10:00:50.312939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.734 [2024-11-20 10:00:50.312979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.734 qpair failed and we were unable to recover it. 00:27:13.734 [2024-11-20 10:00:50.313119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.734 [2024-11-20 10:00:50.313153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.734 qpair failed and we were unable to recover it. 00:27:13.734 [2024-11-20 10:00:50.313260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.734 [2024-11-20 10:00:50.313300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.734 qpair failed and we were unable to recover it. 00:27:13.734 [2024-11-20 10:00:50.313405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.734 [2024-11-20 10:00:50.313439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.734 qpair failed and we were unable to recover it. 00:27:13.734 [2024-11-20 10:00:50.313606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.734 [2024-11-20 10:00:50.313652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.734 qpair failed and we were unable to recover it. 
00:27:13.734 [2024-11-20 10:00:50.313826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.734 [2024-11-20 10:00:50.313860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420
00:27:13.734 qpair failed and we were unable to recover it.
00:27:13.734 [2024-11-20 10:00:50.314002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.734 [2024-11-20 10:00:50.314036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420
00:27:13.734 qpair failed and we were unable to recover it.
00:27:13.734 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 10:00:50.314 through 10:00:50.355 ...]
00:27:13.739 [2024-11-20 10:00:50.355192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.740 [2024-11-20 10:00:50.355226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420
00:27:13.740 qpair failed and we were unable to recover it.
00:27:13.740 [2024-11-20 10:00:50.355330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.355364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.355527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.355567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.355723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.355762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.355934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.355969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.356125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.356160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.356292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.356343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.356523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.356563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.356734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.356768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.356878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.356913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.357056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.357097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 
00:27:13.740 [2024-11-20 10:00:50.357284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.357342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.357468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.357507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.357666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.357707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.357830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.357872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.358036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.358076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.358237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.358279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.358475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.358523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.358654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.358694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.358849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.358888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.359053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.359093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 
00:27:13.740 [2024-11-20 10:00:50.359223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.359263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.359451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.359492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.359665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.359700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.359864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.359900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.360083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.360118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.360265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.360310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.360450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.360490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.360693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.360734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.360909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.360959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.361072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.361108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 
00:27:13.740 [2024-11-20 10:00:50.361273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.361317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.361443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.361498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.361676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.361710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.361866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.361900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.362045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.362079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.362327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.362386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.740 qpair failed and we were unable to recover it. 00:27:13.740 [2024-11-20 10:00:50.362547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.740 [2024-11-20 10:00:50.362588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.362763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.362800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.362932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.362968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.363101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.363135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 
00:27:13.741 [2024-11-20 10:00:50.363273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.363326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.363462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.363502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.363632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.363672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.363838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.363884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.364071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.364110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.364283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.364336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.364483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.364519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.364686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.364725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.364884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.364924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.365043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.365083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 
00:27:13.741 [2024-11-20 10:00:50.365205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.365245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.365448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.365489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.365643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.365683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.365789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.365829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.365974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.366014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.366186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.366221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.366391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.366433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.366570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.366611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.366800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.366875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.367040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.367079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 
00:27:13.741 [2024-11-20 10:00:50.367244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.367283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.367456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.367496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.367662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.367702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.367873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.367914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.368047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.368111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.368336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.368376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.368534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.368574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.368719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.368759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.368918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.368960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.369130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.369171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 
00:27:13.741 [2024-11-20 10:00:50.369371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.369412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.369558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.369592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.369712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.369747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.369852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.741 [2024-11-20 10:00:50.369886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.741 qpair failed and we were unable to recover it. 00:27:13.741 [2024-11-20 10:00:50.369988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.370044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.370206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.370246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.370423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.370458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.370607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.370642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.370775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.370814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.370979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.371018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 
00:27:13.742 [2024-11-20 10:00:50.371239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.371273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.371456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.371492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.371637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.371675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.371841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.371881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.372051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.372092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.372255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.372296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.372468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.372508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.372652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.372692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.372854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.372895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.373060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.373106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 
00:27:13.742 [2024-11-20 10:00:50.373282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.373359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.373569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.373608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.373765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.373804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.373963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.374004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.374200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.374240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.374398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.374437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.374609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.374659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.374792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.374832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.374979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.375020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.375135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.375175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 
00:27:13.742 [2024-11-20 10:00:50.375365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.375405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.375517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.375552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.375719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.375754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.375892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.375935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.376046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.376085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.376249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.376288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.376455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.376496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.376692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.376733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.376856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.376896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.377071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.377111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 
00:27:13.742 [2024-11-20 10:00:50.377285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.377341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.742 qpair failed and we were unable to recover it. 00:27:13.742 [2024-11-20 10:00:50.377476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.742 [2024-11-20 10:00:50.377516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.377679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.377719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.377911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.377965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.378148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.378188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.378355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.378398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.378536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.378576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.378710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.378749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.378934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.378973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.379137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.379171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 
00:27:13.743 [2024-11-20 10:00:50.379298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.379343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.379517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.379556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.379712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.379751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.379879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.379919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.380051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.380090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.380250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.380292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.380446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.380486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.380601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.380641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.380795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.380835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.381031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.381072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 
00:27:13.743 [2024-11-20 10:00:50.381229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.381268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.381410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.381471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.381601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.381646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.381774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.381817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.381947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.381988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.382154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.382216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.382448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.382498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.382664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.382703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.382897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.382937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.383052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.383104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 
00:27:13.743 [2024-11-20 10:00:50.383259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.383294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.383446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.383511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.383696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.383737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.383883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.383924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.384045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.384109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.384334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.384376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.384502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.384543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.384699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.384774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.384939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.743 [2024-11-20 10:00:50.384979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.743 qpair failed and we were unable to recover it. 00:27:13.743 [2024-11-20 10:00:50.385112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.385152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 
00:27:13.744 [2024-11-20 10:00:50.385347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.385405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.385531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.385571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.385703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.385743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.385900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.385954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.386069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.386104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.386317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.386353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.386502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.386536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.386708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.386749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.386886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.386925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.387117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.387157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 
00:27:13.744 [2024-11-20 10:00:50.387371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.387406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.387547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.387600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.387791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.387833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.387975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.388018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.388209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.388249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.388405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.388447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.388618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.388657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.388816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.388880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.388997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.389030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.389230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.389269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 
00:27:13.744 [2024-11-20 10:00:50.389421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.389466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.389675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.389710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.389832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.389868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.390013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.390053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.390213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.390253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.390396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.390439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.390567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.390622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.390794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.390836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.390968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.391008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.391172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.391213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 
00:27:13.744 [2024-11-20 10:00:50.391406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.391446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.391569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.391609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.391742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.744 [2024-11-20 10:00:50.391784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.744 qpair failed and we were unable to recover it. 00:27:13.744 [2024-11-20 10:00:50.391975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.392014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.392166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.392206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.392370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.392416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.392571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.392611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.392747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.392788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.392953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.392993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.393121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.393161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 
00:27:13.745 [2024-11-20 10:00:50.393319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.393371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.393534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.393576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.393772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.393812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.393932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.393972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.394187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.394232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.394371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.394412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.394569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.394637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.394861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.394896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.395014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.395049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.395184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.395225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 
00:27:13.745 [2024-11-20 10:00:50.395423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.395465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.395583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.395623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.395780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.395820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.396013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.396049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.396184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.396218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.396375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.396414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.396547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.396586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.396747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.396786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.396948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.396997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.397163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.397201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 
00:27:13.745 [2024-11-20 10:00:50.397369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.397419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.397581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.397620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.397759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.397799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.397979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.398018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.398179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.398262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.398499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.398571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.398765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.398819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.399071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.399137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.399353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.399394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.399527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.399568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 
00:27:13.745 [2024-11-20 10:00:50.399741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.745 [2024-11-20 10:00:50.399780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.745 qpair failed and we were unable to recover it. 00:27:13.745 [2024-11-20 10:00:50.399940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.399978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.400129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.400205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.400373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.400413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.400567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.400606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.400768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.400808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.400945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.400993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.401129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.401197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.401408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.401443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.401578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.401612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 
00:27:13.746 [2024-11-20 10:00:50.401744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.401787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.401948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.402003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.402195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.402270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.402510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.402543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.402663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.402698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.402855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.402915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.403065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.403130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.403300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.403389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.403606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.403644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.403780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.403817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 
00:27:13.746 [2024-11-20 10:00:50.404009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.404048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.404177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.404251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.404451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.404486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.404613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.404646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.404811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.404852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.405028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.405093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.405266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.405358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.405494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.405533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.405720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.405759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.405933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.405967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 
00:27:13.746 [2024-11-20 10:00:50.406125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.406187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.406352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.406392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.406578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.406617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.406803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.406842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.406966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.407003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.407161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.407199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.407362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.407416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.407584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.407622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.407809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.746 [2024-11-20 10:00:50.407848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.746 qpair failed and we were unable to recover it. 00:27:13.746 [2024-11-20 10:00:50.407986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.408024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 
00:27:13.747 [2024-11-20 10:00:50.408185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.408226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.408408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.408448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.408611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.408652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.408811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.408850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.409040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.409078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.409245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.409283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.409477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.409521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.409698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.409738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.409878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.409919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.410084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.410125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 
00:27:13.747 [2024-11-20 10:00:50.410322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.410366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.410533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.410575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.410753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.410792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.410920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.410963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.411164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.411204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.411342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.411383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.411584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.411617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.411726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.411759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.411932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.411979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.412160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.412193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 
00:27:13.747 [2024-11-20 10:00:50.412371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.412406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.412559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.412612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.412757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.412800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.412973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.413014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.413154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.413235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.413434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.413476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.413612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.413651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.413815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.413849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.413993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.414027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.414254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.414331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 
00:27:13.747 [2024-11-20 10:00:50.414513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.414553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.414694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.414738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.414889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.414930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.415128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.415167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.415361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.415402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.415596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.415635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.415796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.747 [2024-11-20 10:00:50.415854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.747 qpair failed and we were unable to recover it. 00:27:13.747 [2024-11-20 10:00:50.415985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.748 [2024-11-20 10:00:50.416027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.748 qpair failed and we were unable to recover it. 00:27:13.748 [2024-11-20 10:00:50.416169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.748 [2024-11-20 10:00:50.416209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.748 qpair failed and we were unable to recover it. 00:27:13.748 [2024-11-20 10:00:50.416355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.748 [2024-11-20 10:00:50.416396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.748 qpair failed and we were unable to recover it. 
00:27:13.748 [2024-11-20 10:00:50.416561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.748 [2024-11-20 10:00:50.416603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420
00:27:13.748 qpair failed and we were unable to recover it.
00:27:13.748 [... the same three-line failure pattern repeats back-to-back, with only the bracketed application timestamps advancing, from 10:00:50.416561 through 10:00:50.464149 (console timestamps 00:27:13.748-00:27:13.753): every attempt logs connect() failed, errno = 111 from posix.c:1054:posix_sock_create, then a sock connection error from nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock for tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420, and ends with "qpair failed and we were unable to recover it." ...]
00:27:13.753 [2024-11-20 10:00:50.464384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.753 [2024-11-20 10:00:50.464438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.753 qpair failed and we were unable to recover it. 00:27:13.753 [2024-11-20 10:00:50.464663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.753 [2024-11-20 10:00:50.464715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.753 qpair failed and we were unable to recover it. 00:27:13.753 [2024-11-20 10:00:50.464917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.753 [2024-11-20 10:00:50.464966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.753 qpair failed and we were unable to recover it. 00:27:13.753 [2024-11-20 10:00:50.465144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.753 [2024-11-20 10:00:50.465193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.753 qpair failed and we were unable to recover it. 00:27:13.753 [2024-11-20 10:00:50.465380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.753 [2024-11-20 10:00:50.465429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.753 qpair failed and we were unable to recover it. 00:27:13.753 [2024-11-20 10:00:50.465600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.753 [2024-11-20 10:00:50.465653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.753 qpair failed and we were unable to recover it. 00:27:13.753 [2024-11-20 10:00:50.465820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.753 [2024-11-20 10:00:50.465853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.466043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.466090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.466292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.466373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.466568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.466615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 
00:27:13.754 [2024-11-20 10:00:50.466843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.466894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.467131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.467182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.467387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.467437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.467632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.467688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.467903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.467954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.468156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.468207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.468447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.468485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.468627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.468660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.468841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.468891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.469052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.469106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 
00:27:13.754 [2024-11-20 10:00:50.469344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.469407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.469613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.469666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.469842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.469894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.470060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.470112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.470352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.470406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.470608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.470662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.470908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.470961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.471213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.471265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.471498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.471551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.471758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.471812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 
00:27:13.754 [2024-11-20 10:00:50.471993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.472048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.472292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.472356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.472600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.472634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.472758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.472794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.472997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.473050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.473263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.473326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.473534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.473586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.473696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.473729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.473903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.473956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.474171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.474221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 
00:27:13.754 [2024-11-20 10:00:50.474431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.474484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.474724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.474776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.474949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.475002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.475118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.754 [2024-11-20 10:00:50.475150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.754 qpair failed and we were unable to recover it. 00:27:13.754 [2024-11-20 10:00:50.475246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.475279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.475466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.475525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.475759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.475810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.476006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.476065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.476316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.476369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.476575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.476609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 
00:27:13.755 [2024-11-20 10:00:50.476753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.476786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.476995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.477045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.477308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.477343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.477489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.477527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.477682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.477734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.477970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.478023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.478250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.478286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.478411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.478445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.478653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.478706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.478942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.478993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 
00:27:13.755 [2024-11-20 10:00:50.479201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.479257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.479483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.479537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.479781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.479833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.480073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.480124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.480338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.480394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.480632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.480685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.480923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.480975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.481160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.481212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.481443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.481510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.481733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.481785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 
00:27:13.755 [2024-11-20 10:00:50.481989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.482041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.482278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.482340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.482492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.482544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.482774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.482827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.483033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.483085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.483256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.483319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.483515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.483567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.483786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.483842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.484054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.484108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.484327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.484381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 
00:27:13.755 [2024-11-20 10:00:50.484626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.484679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.484898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.755 [2024-11-20 10:00:50.484933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.755 qpair failed and we were unable to recover it. 00:27:13.755 [2024-11-20 10:00:50.485042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.485075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.485266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.485328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.485542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.485594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.485796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.485853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.486077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.486129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.486346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.486381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.486526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.486560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.486807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.486840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 
00:27:13.756 [2024-11-20 10:00:50.486983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.487017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.487205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.487256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.487470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.487521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.487761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.487826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.488032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.488096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.488344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.488402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.488610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.488667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.488827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.488883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.489123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.489194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.489396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.489453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 
00:27:13.756 [2024-11-20 10:00:50.489704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.489759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.489956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.490012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.490199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.490270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.490557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.490593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.490730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.490764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.490913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.490970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.491219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.491275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.491560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.491617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.491805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.491864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.492097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.492153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 
00:27:13.756 [2024-11-20 10:00:50.492342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.492402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.492620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.492676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.492897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.492952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.493171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.493227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.493442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.493510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.493747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.493803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.494000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.494050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.494233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.494289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.494509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.494571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 00:27:13.756 [2024-11-20 10:00:50.494796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.756 [2024-11-20 10:00:50.494852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.756 qpair failed and we were unable to recover it. 
00:27:13.757 [2024-11-20 10:00:50.495047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.495104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.495327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.495384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.495598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.495655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.495940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.495997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.496217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.496273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.496468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.496525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.496707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.496767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.496982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.497038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.497285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.497354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.497606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.497663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 
00:27:13.757 [2024-11-20 10:00:50.497922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.497978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.498163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.498219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.498414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.498470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.498680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.498745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.498996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.499052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.499266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.499355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.499543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.499600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.499818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.499873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.500088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.500143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.500380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.500438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 
00:27:13.757 [2024-11-20 10:00:50.500615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.500673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.500865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.500921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.501089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.501144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.501299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.501367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.501567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.501623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.501782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.501837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.502053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.502111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.502377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.502435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.502649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.502705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.502891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.502949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 
00:27:13.757 [2024-11-20 10:00:50.503168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.503224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.503449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.503505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.503731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.503786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.504034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.504092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.504301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.504368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.504626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.504682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.757 [2024-11-20 10:00:50.504928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.757 [2024-11-20 10:00:50.504985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.757 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.505193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.505248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.505456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.505512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.505751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.505786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 
00:27:13.758 [2024-11-20 10:00:50.505931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.505984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.506157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.506213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.506395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.506452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.506670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.506720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.506862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.506896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.507067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.507138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.507346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.507403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.507624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.507680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.507898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.507953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.508202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.508257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 
00:27:13.758 [2024-11-20 10:00:50.508440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.508497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.508681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.508739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.508953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.509010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.509226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.509293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.509493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.509549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.509805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.509861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.510067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.510123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.510332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.510389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.510652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.510709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.510923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.510980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 
00:27:13.758 [2024-11-20 10:00:50.511232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.511288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.511609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.511643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.511787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.511821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.512085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.512158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.512430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.512505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.512741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.512816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.513008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.513063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.513292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.513359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.513639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.513713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.758 [2024-11-20 10:00:50.513984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.514058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 
00:27:13.758 [2024-11-20 10:00:50.514237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.758 [2024-11-20 10:00:50.514293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.758 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.514513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.514590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.514814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.514870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.515121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.515176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.515477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.515552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.515765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.515841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.516111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.516167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.516412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.516486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.516691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.516766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.517006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.517081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 
00:27:13.759 [2024-11-20 10:00:50.517269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.517338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.517587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.517662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.517918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.517952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.518096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.518129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.518333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.518392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.518530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.518565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.518788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.518862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.519071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.519129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.519348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.519406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.519633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.519688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 
00:27:13.759 [2024-11-20 10:00:50.519905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.519963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.520188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.520245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.520480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.520537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.520792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.520875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.521094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.521153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.521360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.521395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.521541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.521575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.521786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.521860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.522073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.522128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.522367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.522402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 
00:27:13.759 [2024-11-20 10:00:50.522571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.522604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.522815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.522849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.522959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.522994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.523134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.523169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.523441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.523518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.523758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.523832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.524008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.524064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.524328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.524386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.759 [2024-11-20 10:00:50.524631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.759 [2024-11-20 10:00:50.524689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.759 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.524978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.525051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 
00:27:13.760 [2024-11-20 10:00:50.525314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.525372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.525651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.525685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.525804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.525839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.525954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.525987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.526143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.526199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.526392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.526468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.526720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.526796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.527034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.527090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.527315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.527372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.527565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.527646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 
00:27:13.760 [2024-11-20 10:00:50.527863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.527897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.528050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.528084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.528313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.528372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.528616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.528689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.528929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.529002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.529226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.529282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.529520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.529594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.529782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.529855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.530075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.530149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.530430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.530505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 
00:27:13.760 [2024-11-20 10:00:50.530790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.530863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.531058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.531114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.531290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.531382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.531633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.531699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.531984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.532058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.532275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.532344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.532551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.532632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.532908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.532981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.533188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.533243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.533500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.533575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 
00:27:13.760 [2024-11-20 10:00:50.533775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.533849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.534027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.534082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.534258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.534328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.534515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.534571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.534777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.534811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.534980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.535014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.760 [2024-11-20 10:00:50.535228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.760 [2024-11-20 10:00:50.535262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.760 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.535410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.535445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.535578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.535611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.535872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.535906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 
00:27:13.761 [2024-11-20 10:00:50.536034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.536068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.536248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.536323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.536525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.536581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.536828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.536883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.537040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.537097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.537214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.537248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.537458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.537515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.537749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.537823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.538045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.538100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.538356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.538412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 
00:27:13.761 [2024-11-20 10:00:50.538684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.538758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.539025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.539082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.539295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.539362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.539599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.539676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.539973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.540047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.540264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.540331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.540611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.540687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.540924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.541000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.541216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.541272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.541526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.541599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 
00:27:13.761 [2024-11-20 10:00:50.541869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.541942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.542167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.542222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.542474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.542549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.542836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.542925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.543180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.543236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.543547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.543621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.543823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.543900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.544085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.544140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.544349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.544407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.544640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.544721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 
00:27:13.761 [2024-11-20 10:00:50.545018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.545092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.545335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.545411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.545630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.545687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.545922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.545979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.761 qpair failed and we were unable to recover it. 00:27:13.761 [2024-11-20 10:00:50.546234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.761 [2024-11-20 10:00:50.546290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.546593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.546666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.546952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.547026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.547237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.547293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.547569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.547644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.547932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.548006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 
00:27:13.762 [2024-11-20 10:00:50.548234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.548290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.548583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.548658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.548908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.548983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.549234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.549291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.549529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.549603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.549886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.549961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.550182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.550238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.550495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.550569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.550858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.550932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.551152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.551208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 
00:27:13.762 [2024-11-20 10:00:50.551467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.551503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.551642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.551676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.551959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.552032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.552263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.552332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.552635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.552708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.552939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.552973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.553122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.553156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.553324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.553381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.553603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.553678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.553951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.554025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 
00:27:13.762 [2024-11-20 10:00:50.554279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.554346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.554553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.554629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.554872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.554907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.555048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.555109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.555349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.555406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.555584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.555641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.555908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.555983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.556206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.556263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.762 qpair failed and we were unable to recover it. 00:27:13.762 [2024-11-20 10:00:50.556509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.762 [2024-11-20 10:00:50.556583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.556834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.556908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 
00:27:13.763 [2024-11-20 10:00:50.557160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.557216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.557473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.557547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.557783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.557855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.558080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.558135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.558392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.558469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.558733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.558807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.559044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.559117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.559337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.559395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.559675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.559749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.560040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.560113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 
00:27:13.763 [2024-11-20 10:00:50.560326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.560385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.560504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.560539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.560728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.560804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.561017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.561075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.561279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.561347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.561599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.561675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.561942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.561975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.562092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.562126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.562346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.562403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.562637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.562712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 
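For context: errno 111 on Linux is ECONNREFUSED. The repeated pair of messages above appears to come from the initiator side of the test: the POSIX sock layer (posix.c:1054, posix_sock_create) reports that the underlying connect() to 10.0.0.2 port 4420 was refused, meaning nothing was accepting on that address and port at that moment, and nvme_tcp_qpair_connect_sock (nvme_tcp.c:2288) then fails the qpair (tqpair=0x7f4a30000b90) because no socket could be established. A minimal standalone C sketch of the same failure mode is shown below; it assumes only the address and port copied from the log and is not SPDK code.

    /* Illustrative sketch only, not SPDK's posix_sock_create(). It shows how a
     * plain connect() to an address with no listener surfaces errno 111
     * (ECONNREFUSED), which is what the posix.c:1054 messages above report.
     * The address and port are copied from the log; everything else is assumed. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                     /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no listener on 10.0.0.2:4420 this prints errno 111: Connection refused. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }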
00:27:13.763 [2024-11-20 10:00:50.563024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.563109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.563292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.563376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.563595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.563676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.563943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.564009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.564270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.564372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.564613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.564679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.564988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.565052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.565329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.565364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.565506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.565571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.565820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.565884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 
00:27:13.763 [2024-11-20 10:00:50.566182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.566246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.566543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.566599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.566930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.566994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.567214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.567248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.567429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.567479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.567693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.567748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.763 qpair failed and we were unable to recover it. 00:27:13.763 [2024-11-20 10:00:50.568038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.763 [2024-11-20 10:00:50.568102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.568391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.568447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.568649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.568712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.568994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.569057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 
00:27:13.764 [2024-11-20 10:00:50.569326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.569400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.569605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.569660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.569891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.569956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.570203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.570266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.570532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.570590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.570851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.570885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.571009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.571041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.571228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.571293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.571503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.571559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.571759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.571824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 
00:27:13.764 [2024-11-20 10:00:50.572112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.572178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.572430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.572464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.572610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.572643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.572895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.572959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.573202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.573266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.573518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.573574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.573886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.573919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.574170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.574234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.574526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.574582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.574780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.574835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 
00:27:13.764 [2024-11-20 10:00:50.575093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.575156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.575440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.575497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.575704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.575759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.576043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.576107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.576373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.576429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.576631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.576686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.576924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.576989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.577228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.577292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.577594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.577649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.577943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.578007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 
00:27:13.764 [2024-11-20 10:00:50.578284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.578375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.578552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.578609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.578867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.578931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.579217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.764 [2024-11-20 10:00:50.579280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.764 qpair failed and we were unable to recover it. 00:27:13.764 [2024-11-20 10:00:50.579615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.579691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.579986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.580050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.580323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.580390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.580588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.580652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.580934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.580997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.581291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.581371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 
00:27:13.765 [2024-11-20 10:00:50.581574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.581637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.581894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.581957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.582213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.582277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.582558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.582622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.582862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.582925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.583160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.583224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.583548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.583614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.583851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.583914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.584214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.584277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.584564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.584628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 
00:27:13.765 [2024-11-20 10:00:50.584913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.584976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.585220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.585283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.585570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.585634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.585868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.585932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.586199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.586262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.586566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.586637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.586899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.586962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.587262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.587360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.587617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.587680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.587929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.587992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 
00:27:13.765 [2024-11-20 10:00:50.588217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.588281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.588561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.588635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.588891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.588954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.589200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.589267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.589499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.589565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.589810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.589874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.590165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.590229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.590487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.590554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.590844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.590908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.591185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.591248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 
00:27:13.765 [2024-11-20 10:00:50.591513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.591578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.591867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.591931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.765 [2024-11-20 10:00:50.592177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.765 [2024-11-20 10:00:50.592239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.765 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.592538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.592603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.592862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.592929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.593198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.593263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.593583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.593647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.593883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.593949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.594199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.594263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.594526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.594590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 
00:27:13.766 [2024-11-20 10:00:50.594828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.594893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.595114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.595176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.595466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.595530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.595767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.595830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.596111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.596175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.596388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.596453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.596734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.596798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.597081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.597145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.597412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.597476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.597727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.597791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 
00:27:13.766 [2024-11-20 10:00:50.598053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.598118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.598344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.598407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.598653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.598716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.598914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.598977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.599218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.599282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.599585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.599649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.599929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.599993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.600236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.600300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.600602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.600666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.600961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.601025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 
00:27:13.766 [2024-11-20 10:00:50.601327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.601392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.601667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.601731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.602013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.602088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.602381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.602447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.602747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.602811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.603072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.603136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.603412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.603478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.603733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.603796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.604080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.604144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.766 [2024-11-20 10:00:50.604436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.604502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 
00:27:13.766 [2024-11-20 10:00:50.604741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.766 [2024-11-20 10:00:50.604804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.766 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.605086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.605150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.605401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.605466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.605737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.605800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.606042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.606107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.606345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.606411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.606677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.606743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.607042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.607106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.607354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.607421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.607710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.607774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 
00:27:13.767 [2024-11-20 10:00:50.608016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.608079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.608332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.608396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.608690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.608754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.609004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.609070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.609266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.609343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.609567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.609630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.609915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.609979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.610214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.610277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.610578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.610642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.610878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.610953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 
00:27:13.767 [2024-11-20 10:00:50.611241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.611333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.611576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.611640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.611890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.611953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.612201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.612267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.612587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.612659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.612871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.612936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.613231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.613319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.613528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.613592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.613856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.613920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.614184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.614248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 
00:27:13.767 [2024-11-20 10:00:50.614550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.614624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.614873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.614937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.615186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.615251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.615538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.615603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.767 qpair failed and we were unable to recover it. 00:27:13.767 [2024-11-20 10:00:50.615854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.767 [2024-11-20 10:00:50.615919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.768 qpair failed and we were unable to recover it. 00:27:13.768 [2024-11-20 10:00:50.616166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.768 [2024-11-20 10:00:50.616233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.768 qpair failed and we were unable to recover it. 00:27:13.768 [2024-11-20 10:00:50.616494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.768 [2024-11-20 10:00:50.616558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.768 qpair failed and we were unable to recover it. 00:27:13.768 [2024-11-20 10:00:50.616784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.768 [2024-11-20 10:00:50.616849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.768 qpair failed and we were unable to recover it. 00:27:13.768 [2024-11-20 10:00:50.617045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.768 [2024-11-20 10:00:50.617112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.768 qpair failed and we were unable to recover it. 00:27:13.768 [2024-11-20 10:00:50.617406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.768 [2024-11-20 10:00:50.617472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.768 qpair failed and we were unable to recover it. 
00:27:13.768 [2024-11-20 10:00:50.617760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.768 [2024-11-20 10:00:50.617824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.768 qpair failed and we were unable to recover it. 00:27:13.768 [2024-11-20 10:00:50.618112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.768 [2024-11-20 10:00:50.618178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.768 qpair failed and we were unable to recover it. 00:27:13.768 [2024-11-20 10:00:50.618426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.768 [2024-11-20 10:00:50.618491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.768 qpair failed and we were unable to recover it. 00:27:13.768 [2024-11-20 10:00:50.618738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.768 [2024-11-20 10:00:50.618802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:13.768 qpair failed and we were unable to recover it. 00:27:13.768 [2024-11-20 10:00:50.619050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.619114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.619405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.619470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.619699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.619776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.620052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.620117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.620377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.620445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.620680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.620746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 
00:27:14.061 [2024-11-20 10:00:50.620986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.621050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.621294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.621375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.621649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.621713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.621926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.621990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.622211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.622275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.622582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.622649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.622916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.622980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.623239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.623326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.623554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.623618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.623880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.623945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 
00:27:14.061 [2024-11-20 10:00:50.624199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.624263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.624537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.624602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.624853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.624917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.625114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.625176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.625468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.625535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.625819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.625891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.626112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.626176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.626415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.626481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.626765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.626830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.627049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.627111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 
00:27:14.061 [2024-11-20 10:00:50.627361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.627429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.627684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.627748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.628005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.628070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.629625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.629669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.629928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.629981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.630117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.630154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.630368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.630400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.630550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.630607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.061 [2024-11-20 10:00:50.630804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.061 [2024-11-20 10:00:50.630864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.061 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.630969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.630999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 
00:27:14.062 [2024-11-20 10:00:50.631125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.631155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.631317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.631347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.631439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.631469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.631596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.631656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.631813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.631843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.631955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.631985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.632120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.632159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.632262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.632292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.632500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.632557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.632738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.632794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 
00:27:14.062 [2024-11-20 10:00:50.632907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.632937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.633046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.633077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.633198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.633232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.633342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.633372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.633462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.633492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.633598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.633635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.633726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.633755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.633852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.633882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.633979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.634012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.634145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.634175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 
00:27:14.062 [2024-11-20 10:00:50.634320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.634351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.634489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.634518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.634627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.634657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.634767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.634798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.634932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.634973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.635075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.635104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.635236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.635295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.635555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.635625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.635849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.635916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.636215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.636246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 
00:27:14.062 [2024-11-20 10:00:50.636370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.636402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.636512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.636543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.636673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.636704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.636795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.636826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.636965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.636996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.637153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.637183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.062 [2024-11-20 10:00:50.637289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.062 [2024-11-20 10:00:50.637326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.062 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.637460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.637491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.637662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.637732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.637994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.638061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 
00:27:14.063 [2024-11-20 10:00:50.638261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.638300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.638450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.638487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.638621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.638660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.638800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.638850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.638989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.639042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.639243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.639273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.639428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.639478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.639580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.639610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.639739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.639769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.639899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.639931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 
00:27:14.063 [2024-11-20 10:00:50.640142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.640176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.640279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.640327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.640421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.640450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.640587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.640627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.640760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.640789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.640915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.640944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.641105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.641134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.641269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.641313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.641521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.641552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.641714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.641744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 
00:27:14.063 [2024-11-20 10:00:50.641848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.641879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.641993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.642023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.642156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.642185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.642327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.642359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.642477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.642525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.642696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.642726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.642845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.642874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.643016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.643045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.643174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.643204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.643313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.643358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 
00:27:14.063 [2024-11-20 10:00:50.643473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.643499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.643639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.643669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.643795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.643826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.643935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.643961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.644092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.063 [2024-11-20 10:00:50.644118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.063 qpair failed and we were unable to recover it. 00:27:14.063 [2024-11-20 10:00:50.644274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.644318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.644439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.644470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.644585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.644611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.644766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.644808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.644970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.645000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 
00:27:14.064 [2024-11-20 10:00:50.645132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.645157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.645316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.645355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.645461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.645488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.645635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.645683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.645805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.645840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.646008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.646040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.646149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.646179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.646317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.646345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.646461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.646492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.646620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.646650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 
00:27:14.064 [2024-11-20 10:00:50.646853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.646895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.647028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.647058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.647202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.647228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.647325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.647371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.647501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.647533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.647686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.647733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.647847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.647889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.648015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.648041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.648133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.648158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.648283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.648329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 
00:27:14.064 [2024-11-20 10:00:50.648473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.648502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.648590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.648628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.648755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.648784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.649523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.649556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.649688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.649728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.650644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.650676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.650818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.650847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.651611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.651643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.651805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.651833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.651942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.651968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 
00:27:14.064 [2024-11-20 10:00:50.652132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.652158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.652276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-11-20 10:00:50.652321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.064 qpair failed and we were unable to recover it. 00:27:14.064 [2024-11-20 10:00:50.652467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.652493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.652660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.652686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.652780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.652807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.652921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.652966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.653112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.653138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.653246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.653272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.653397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.653424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.653540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.653567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 
00:27:14.065 [2024-11-20 10:00:50.653688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.653715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.653857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.653885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.653987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.654013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.654161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.654186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.654319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.654363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.654492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.654535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.654658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.654684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.654806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.654832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.654984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.655011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.655100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.655126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 
00:27:14.065 [2024-11-20 10:00:50.655253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.655293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.655447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.655488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.655655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.655694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.655824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.655856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.655948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.655977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.656105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.656131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.656228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.656255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.656379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.656405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.656494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.656520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.656644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.656689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 
00:27:14.065 [2024-11-20 10:00:50.656836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.656863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.656960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.656987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.657098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.657128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.657212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.657238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.657348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.065 [2024-11-20 10:00:50.657375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.065 qpair failed and we were unable to recover it. 00:27:14.065 [2024-11-20 10:00:50.657491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.657517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.657632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.657659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.657795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.657820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.657934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.657960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.658074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.658100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 
00:27:14.066 [2024-11-20 10:00:50.658219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.658249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.658364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.658402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.658499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.658529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.658632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.658658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.658803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.658829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.658921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.658947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.659041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.659068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.659223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.659263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.659379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.659419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.659516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.659545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 
00:27:14.066 [2024-11-20 10:00:50.659628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.659655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.659758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.659786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.659886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.659914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.659995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.660022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.660222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.660252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.660369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.660398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.660515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.660544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.660666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.660692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.660786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.660814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.660915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.660942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 
00:27:14.066 [2024-11-20 10:00:50.661053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.661082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.661181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.661208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.661290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.661331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.661419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.661446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.661543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.661583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.661721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.661748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.661901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.661928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.662038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.662065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.662150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.662176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.662263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.662291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 
00:27:14.066 [2024-11-20 10:00:50.662384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.662411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.662527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.662553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.662657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.066 [2024-11-20 10:00:50.662698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.066 qpair failed and we were unable to recover it. 00:27:14.066 [2024-11-20 10:00:50.662824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.662852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.662970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.662996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.663105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.663132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.663213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.663240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.663345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.663372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.663455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.663481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.663564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.663590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 
00:27:14.067 [2024-11-20 10:00:50.663688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.663713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.663809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.663837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.663965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.664005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.664106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.664146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.664243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.664271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.664368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.664397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.664536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.664563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.664683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.664710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.664809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.664836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.664922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.664948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 
00:27:14.067 [2024-11-20 10:00:50.665093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.665118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.665231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.665258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.665375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.665406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.665506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.665533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.665617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.665643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.665729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.665756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.665841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.665874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.665989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.666016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.666109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.666135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.666227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.666259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 
00:27:14.067 [2024-11-20 10:00:50.666357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.666385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.666474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.666501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.666608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.666635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.666721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.666749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.666831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.666856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.666954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.666979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.667091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.667116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.667198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.667224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.667321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.667348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.667433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.667460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 
00:27:14.067 [2024-11-20 10:00:50.667552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.067 [2024-11-20 10:00:50.667579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.067 qpair failed and we were unable to recover it. 00:27:14.067 [2024-11-20 10:00:50.667721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.667753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.667871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.667898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.667991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.668018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.668148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.668174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.668262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.668289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.668391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.668418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.668509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.668538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.668666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.668693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.668779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.668808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 
00:27:14.068 [2024-11-20 10:00:50.668907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.668933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.669033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.669073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.669212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.669251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.669359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.669388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.669477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.669505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.669612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.669639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.669733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.669764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.669913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.669940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.670056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.670082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.670196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.670222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 
00:27:14.068 [2024-11-20 10:00:50.670310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.670337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.670417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.670444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.670533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.670558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.670656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.670688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.670805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.670831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.670914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.670940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.671052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.671079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.671174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.671201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.671333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.671361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.671449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.671480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 
00:27:14.068 [2024-11-20 10:00:50.671583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.671609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.671724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.671750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.671838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.671864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.671948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.671979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.672094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.672120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.672200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.672224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.672313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.672339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.672424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.672450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.672544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.068 [2024-11-20 10:00:50.672570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.068 qpair failed and we were unable to recover it. 00:27:14.068 [2024-11-20 10:00:50.672650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.672676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 
00:27:14.069 [2024-11-20 10:00:50.672774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.672799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.672936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.672962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.673060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.673087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.673178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.673205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.673293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.673325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.673438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.673465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.673557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.673583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.673673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.673699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.673819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.673846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.673938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.673964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 
00:27:14.069 [2024-11-20 10:00:50.674054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.674082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.674185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.674224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.674357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.674397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.674495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.674522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.674656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.674688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.674772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.674798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.674918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.674944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.675031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.675057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.675140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.675166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.675262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.675288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 
00:27:14.069 [2024-11-20 10:00:50.675392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.675418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.675503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.675528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.675641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.675668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.675785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.675814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.675901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.675927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.676051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.676078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.676175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.676202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.676314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.676340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.676429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.676462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.676562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.676593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 
00:27:14.069 [2024-11-20 10:00:50.676689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.676715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.676830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.676856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.676976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.677002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.677089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.677115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.677227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.677252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.677337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.677364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.677458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.069 [2024-11-20 10:00:50.677484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.069 qpair failed and we were unable to recover it. 00:27:14.069 [2024-11-20 10:00:50.677598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.677624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.677716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.677752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.677844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.677871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 
00:27:14.070 [2024-11-20 10:00:50.677993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.678020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.678122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.678162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.678289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.678333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.678428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.678454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.678534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.678561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.678648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.678674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.678763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.678789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.678906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.678932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.679015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.679039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.679151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.679178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 
00:27:14.070 [2024-11-20 10:00:50.679258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.679284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.679397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.679423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.679511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.679537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.679671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.679704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.679821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.679847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.679984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.680009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.680130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.680160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.680251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.680278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.680967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.680994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.681198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.681225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 
00:27:14.070 [2024-11-20 10:00:50.681369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.681395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.681487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.681513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.681597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.681623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.681738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.681764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.681849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.681875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.681977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.682016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.682156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.682196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.682308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.682335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.682432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.682460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.070 [2024-11-20 10:00:50.682541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.682568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 
00:27:14.070 [2024-11-20 10:00:50.682689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.070 [2024-11-20 10:00:50.682718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.070 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.682837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.682864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.682949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.682975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.683068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.683093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.683190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.683230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.683344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.683373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.683462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.683491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.683578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.683605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.683720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.683748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.683835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.683866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 
00:27:14.071 [2024-11-20 10:00:50.683978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.684006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.684095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.684123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.684207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.684233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.684332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.684364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.684474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.684501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.684628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.684654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.684770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.684797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.684915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.684941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.685032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.685060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.685178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.685204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 
00:27:14.071 [2024-11-20 10:00:50.685294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.685327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.685418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.685445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.685537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.685564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.685681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.685707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.685801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.685827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.685929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.685954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.686053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.686080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.686184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.686212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.686295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.686344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.686440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.686467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 
00:27:14.071 [2024-11-20 10:00:50.686556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.686583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.686681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.686707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.686793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.686818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.686913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.686939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.687055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.687095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.687209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.687237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.687338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.687366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.687458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.687484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.071 [2024-11-20 10:00:50.687580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.071 [2024-11-20 10:00:50.687607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.071 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.687724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.687750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 
00:27:14.072 [2024-11-20 10:00:50.687869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.687897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.688016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.688043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.688166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.688192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.688276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.688318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.688410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.688436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.688568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.688597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.688723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.688751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.688840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.688867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.689011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.689037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.689137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.689163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 
00:27:14.072 [2024-11-20 10:00:50.689278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.689312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.689397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.689424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.689507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.689533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.689622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.689653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.689729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.689755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.689844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.689870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.689951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.689980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.690066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.690092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.690180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.690209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.690301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.690333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 
00:27:14.072 [2024-11-20 10:00:50.690417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.690443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.690537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.690563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.690681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.690707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.690823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.690849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.690941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.690969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.691082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.691108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.691191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.691217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.691320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.691348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.691435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.691462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.691551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.691577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 
00:27:14.072 [2024-11-20 10:00:50.691670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.691698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.691808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.691834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.691918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.691944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.692040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.692069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.692155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.692183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.692279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.692314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.072 [2024-11-20 10:00:50.692405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.072 [2024-11-20 10:00:50.692432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.072 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.692515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.692542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.692635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.692661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.692772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.692801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 
00:27:14.073 [2024-11-20 10:00:50.692882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.692912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.693001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.693026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.693112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.693138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.693225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.693251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.693369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.693397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.693475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.693501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.693627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.693652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.693744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.693770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.693857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.693884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.694024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.694050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 
00:27:14.073 [2024-11-20 10:00:50.694131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.694157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.694243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.694269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.694405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.694432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.694514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.694540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.694632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.694659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.694773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.694798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.694937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.694963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.695102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.695129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.695256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.695282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.695416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.695447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 
00:27:14.073 [2024-11-20 10:00:50.695540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.695568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.695670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.695696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.695803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.695828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.695913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.695940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.696058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.696084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.696198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.696230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.696343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.696369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.696447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.696478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.696563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.696588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.696684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.696711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 
00:27:14.073 [2024-11-20 10:00:50.696823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.696850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.696958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.696984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.697071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.697097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.697185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.697211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.073 [2024-11-20 10:00:50.697296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.073 [2024-11-20 10:00:50.697327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.073 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.697422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.697449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.697553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.697579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.697655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.697681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.697831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.697860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.697954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.697980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 
00:27:14.074 [2024-11-20 10:00:50.698080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.698120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.698223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.698249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.698346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.698373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.698458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.698484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.698576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.698612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.698734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.698760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.698878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.698904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.699041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.699067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.699154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.699181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.699276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.699325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 
00:27:14.074 [2024-11-20 10:00:50.699408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.699433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.699544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.699570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.699657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.699683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.699844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.699870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.699978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.700009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.700083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.700109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.700235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.700261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.700397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.700423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.700518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.700543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.700669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.700695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 
00:27:14.074 [2024-11-20 10:00:50.700820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.700846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.700964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.700990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.701088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.701114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.701197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.701224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.701344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.701383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.701492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.701520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.701623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.701673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.701793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.701820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.074 qpair failed and we were unable to recover it. 00:27:14.074 [2024-11-20 10:00:50.701979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.074 [2024-11-20 10:00:50.702004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.702092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.702117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 
00:27:14.075 [2024-11-20 10:00:50.702195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.702220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.702343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.702370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.702491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.702517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.702602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.702628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.702756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.702781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.702895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.702922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.703035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.703061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.703151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.703176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.703309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.703343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.703426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.703452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 
00:27:14.075 [2024-11-20 10:00:50.703548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.703573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.703687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.703717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.703837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.703864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.703947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.703973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.704069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.704098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.704189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.704216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.705154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.705183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.705376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.705404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.706129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.706159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.706301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.706334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 
00:27:14.075 [2024-11-20 10:00:50.707019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.707049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.707195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.707222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.707360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.707388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.707478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.707504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.707618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.707644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.707739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.707766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.707854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.707880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.707992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.708024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.708122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.708148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.709004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.709036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 
00:27:14.075 [2024-11-20 10:00:50.709178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.709213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.709926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.709957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.710101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.710128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.710258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.710284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.710388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.710414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.710522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.075 [2024-11-20 10:00:50.710549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.075 qpair failed and we were unable to recover it. 00:27:14.075 [2024-11-20 10:00:50.710652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.710679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.710813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.710839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.710958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.710989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.711096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.711121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 
00:27:14.076 [2024-11-20 10:00:50.711227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.711253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.711367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.711395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.711481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.711507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.711629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.711655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.711792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.711818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.712806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.712845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.712997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.713027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.713177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.713204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.713310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.713338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.713440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.713467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 
00:27:14.076 [2024-11-20 10:00:50.713561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.713587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.713699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.713727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.713829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.713857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.713956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.713983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.714100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.714126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.714242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.714269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.714362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.714390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.714501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.714528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.714645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.714675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.715375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.715407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 
00:27:14.076 [2024-11-20 10:00:50.715508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.715536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.715682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.715710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.715817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.715863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.716022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.716053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.716181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.716210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.716402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.716447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.716566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.716605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.716729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.716757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.716916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.716961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.717075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.717100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 
00:27:14.076 [2024-11-20 10:00:50.717197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.717223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.717338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.717365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.717495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.717521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.717635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.076 [2024-11-20 10:00:50.717661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.076 qpair failed and we were unable to recover it. 00:27:14.076 [2024-11-20 10:00:50.717805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.717833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.717981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.718020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.718116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.718144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.718241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.718271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.718371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.718403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.718522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.718549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 
00:27:14.077 [2024-11-20 10:00:50.718643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.718669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.718772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.718799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.718910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.718938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.719053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.719080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.719169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.719196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.719317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.719345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.719430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.719457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.719547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.719575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.719680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.719707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.719824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.719851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 
00:27:14.077 [2024-11-20 10:00:50.719933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.719959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.720077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.720104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.720206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.720234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.720345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.720384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.720486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.720515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.720615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.720643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.720758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.720784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.720896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.720922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.721009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.721035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 00:27:14.077 [2024-11-20 10:00:50.721780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.077 [2024-11-20 10:00:50.721810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.077 qpair failed and we were unable to recover it. 
00:27:14.077 [2024-11-20 10:00:50.721986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.077 [2024-11-20 10:00:50.722014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420
00:27:14.077 qpair failed and we were unable to recover it.
00:27:14.083 [... the same pair of errors (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420) repeats for every further connection attempt logged between 10:00:50.722 and 10:00:50.752, cycling through tqpair handles 0x7f4a2c000b90, 0x7f4a38000b90, 0x7f4a30000b90, and 0x120dfa0; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:14.083 [2024-11-20 10:00:50.752553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.752580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.752756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.752809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.752944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.752989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.753133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.753183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.753284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.753326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.753462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.753488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.753576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.753620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.753814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.753862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.753977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.754027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.754112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.754146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 
00:27:14.083 [2024-11-20 10:00:50.754292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.754342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.754484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.754524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.754623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.754651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.754760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.754806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.754998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.755044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.755159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.755184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.755333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.755362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.755477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.755503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.755628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.755654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.755740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.755766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 
00:27:14.083 [2024-11-20 10:00:50.755880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.755906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.756053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.756082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.083 [2024-11-20 10:00:50.756196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.083 [2024-11-20 10:00:50.756222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.083 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.756348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.756375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.756486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.756512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.756593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.756628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.756708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.756734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.756813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.756839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.756958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.756984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.757079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.757105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 
00:27:14.084 [2024-11-20 10:00:50.757192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.757218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.757359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.757392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.757479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.757506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.757580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.757605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.757691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.757718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.757829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.757855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.757982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.758012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.758099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.758123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.758239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.758265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.758375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.758416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 
00:27:14.084 [2024-11-20 10:00:50.758542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.758570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.758684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.758711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.758797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.758824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.758969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.758995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.759107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.759133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.759250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.759277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.759398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.759437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.759543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.759583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.759702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.759749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.759890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.759938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 
00:27:14.084 [2024-11-20 10:00:50.760092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.760161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.760313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.760340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.760428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.760454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.760538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.760565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.760680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.760706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.760822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.760849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.760960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.760986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.761084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.761110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.761191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.761217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 00:27:14.084 [2024-11-20 10:00:50.761342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.084 [2024-11-20 10:00:50.761369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.084 qpair failed and we were unable to recover it. 
00:27:14.085 [2024-11-20 10:00:50.761455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.761484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.761639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.761667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.761797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.761823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.761913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.761941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.762071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.762097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.762199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.762238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.762385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.762413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.762500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.762529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.762669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.762699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.762830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.762878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 
00:27:14.085 [2024-11-20 10:00:50.763028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.763077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.763208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.763235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.763339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.763366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.763451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.763478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.763593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.763641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.763782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.763825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.763962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.764012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.764109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.764135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.764229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.764255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.764376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.764403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 
00:27:14.085 [2024-11-20 10:00:50.764517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.764544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.764675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.764701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.764813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.764839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.764959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.764987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.765076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.765102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.765215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.765242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.765340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.765368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.765450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.765476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.765570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.765597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.765707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.765734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 
00:27:14.085 [2024-11-20 10:00:50.765826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.765853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.765988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.766028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.766125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.766153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.766229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.766256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.766351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.766384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.766496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.766523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.766666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.085 [2024-11-20 10:00:50.766693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.085 qpair failed and we were unable to recover it. 00:27:14.085 [2024-11-20 10:00:50.766804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.766830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.766972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.767002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.767103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.767129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 
00:27:14.086 [2024-11-20 10:00:50.767224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.767251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.767375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.767403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.767491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.767517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.767617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.767657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.767831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.767877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.767991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.768021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.768124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.768157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.768269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.768296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.768396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.768421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.768511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.768539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 
00:27:14.086 [2024-11-20 10:00:50.768683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.768714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.768908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.768959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.769097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.769128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.769247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.769273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.769415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.769441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.769556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.769583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.769706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.769738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.769899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.769930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.770026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.770056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.770165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.770196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 
00:27:14.086 [2024-11-20 10:00:50.770327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.770354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.770449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.770476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.770579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.770629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.770756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.770785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.770861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.770889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.771049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.771079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.771204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.771234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.771392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.771421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.771515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.771543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.771625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.771652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 
00:27:14.086 [2024-11-20 10:00:50.771774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.771801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.771917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.771962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.772085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.772111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.772246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.772271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.772371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.086 [2024-11-20 10:00:50.772400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.086 qpair failed and we were unable to recover it. 00:27:14.086 [2024-11-20 10:00:50.772480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.772507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.772591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.772617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.772720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.772753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.772856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.772887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.773016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.773046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 
00:27:14.087 [2024-11-20 10:00:50.773163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.773190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.773317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.773344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.773424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.773450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.773587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.773623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.773752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.773783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.773873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.773903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.774057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.774087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.774194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.774224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.774377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.774404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.774512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.774538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 
00:27:14.087 [2024-11-20 10:00:50.774656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.774683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.774815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.774844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.774939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.774970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.775071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.775102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.775233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.775264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.775403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.775443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.775562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.775590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.775731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.775777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.775937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.775982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.776081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.776108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 
00:27:14.087 [2024-11-20 10:00:50.776197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.776223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.776316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.776344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.776466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.776492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.776587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.776613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.776703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.776729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.776819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.776846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.777005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.777034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.777137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.777163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.777243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.777269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.777412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.777453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 
00:27:14.087 [2024-11-20 10:00:50.777561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.777594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.777700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.777726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.087 [2024-11-20 10:00:50.777835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.087 [2024-11-20 10:00:50.777866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.087 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.778016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.778063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.778178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.778204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.778321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.778369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.778530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.778560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.778667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.778697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.778864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.778911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.779007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.779037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 
00:27:14.088 [2024-11-20 10:00:50.779141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.779171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.779276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.779324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.779439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.779465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.779577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.779608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.779711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.779740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.779872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.779904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.780003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.780032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.780187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.780215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.780328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.780368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.780490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.780518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 
00:27:14.088 [2024-11-20 10:00:50.780648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.780691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.780856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.780887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.780985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.781017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.781125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.781169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.781287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.781320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.781462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.781488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.781588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.781636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.781763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.781807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.781939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.781970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.782102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.782134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 
00:27:14.088 [2024-11-20 10:00:50.782286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.782333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.782458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.782486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.782597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.782643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.782766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.782796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.782953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.782998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.783095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.088 [2024-11-20 10:00:50.783122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.088 qpair failed and we were unable to recover it. 00:27:14.088 [2024-11-20 10:00:50.783210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.783236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.783358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.783385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.783488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.783530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.783674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.783701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 
00:27:14.089 [2024-11-20 10:00:50.783825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.783852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.784000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.784040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.784159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.784204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.784383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.784413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.784502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.784528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.784693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.784720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.784834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.784866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.784978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.785021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.785123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.785152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.785263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.785294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 
00:27:14.089 [2024-11-20 10:00:50.785442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.785467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.785582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.785609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.785711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.785742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.785906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.785942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.786080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.786114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.786216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.786246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.786356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.786383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.786476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.786502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.786643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.786669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.786775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.786823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 
00:27:14.089 [2024-11-20 10:00:50.786992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.787042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.787151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.787194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.787287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.787326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.787413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.787439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.787557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.787601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.787726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.787770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.787925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.787955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.788091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.788121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.788291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.788358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.788488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.788516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 
00:27:14.089 [2024-11-20 10:00:50.788604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.788631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.788718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.788765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.788902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.788948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.089 [2024-11-20 10:00:50.789076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.089 [2024-11-20 10:00:50.789107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.089 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.789230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.789260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.789390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.789421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.789534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.789579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.789690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.789735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.789847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.789878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.790018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.790063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 
00:27:14.090 [2024-11-20 10:00:50.790191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.790218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.790315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.790341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.790436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.790462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.790591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.790617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.790733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.790759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.790884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.790910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.791033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.791059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.791146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.791173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.791316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.791343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.791486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.791512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 
00:27:14.090 [2024-11-20 10:00:50.791600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.791627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.791718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.791744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.791835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.791862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.791958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.792003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.792115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.792144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.792288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.792339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.792432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.792460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.792580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.792613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.792693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.792720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.792874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.792905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 
00:27:14.090 [2024-11-20 10:00:50.793030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.793060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.793177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.793227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.793374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.793408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.793540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.793571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.793698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.793731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.793897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.793928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.794026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.794056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.794188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.794219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.794350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.794377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.794487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.794513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 
00:27:14.090 [2024-11-20 10:00:50.794607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.090 [2024-11-20 10:00:50.794635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.090 qpair failed and we were unable to recover it. 00:27:14.090 [2024-11-20 10:00:50.794758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.794788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.794887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.794918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.795020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.795050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.795190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.795216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.795361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.795390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.795471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.795497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.795592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.795620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.795733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.795779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.795921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.795967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 
00:27:14.091 [2024-11-20 10:00:50.796060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.796088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.796208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.796233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.796348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.796381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.796526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.796577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.796720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.796752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.796919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.796965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.797095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.797126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.797254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.797285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.797396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.797423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.797510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.797537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 
00:27:14.091 [2024-11-20 10:00:50.797656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.797700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.797822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.797852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.798000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.798050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.798197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.798229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.798350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.798376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.798494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.798521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.798631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.798660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.798764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.798795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.798910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.798942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.799060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.799086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 
00:27:14.091 [2024-11-20 10:00:50.799206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.799233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.799354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.799382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.799470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.799496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.799618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.799645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.799728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.799756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.799889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.799919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.800053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.800099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.800212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.800243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.091 qpair failed and we were unable to recover it. 00:27:14.091 [2024-11-20 10:00:50.800355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.091 [2024-11-20 10:00:50.800382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.800475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.800502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 
00:27:14.092 [2024-11-20 10:00:50.800598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.800641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.800738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.800768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.800904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.800933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.801026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.801058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.801162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.801192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.801339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.801378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.801486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.801514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.801603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.801649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.801752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.801784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.801914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.801943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 
00:27:14.092 [2024-11-20 10:00:50.802049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.802080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.802205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.802236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.802368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.802395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.802484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.802510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.802627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.802653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.802786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.802816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.802909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.802940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.803045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.803078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.803256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.803287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.803403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.803430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 
00:27:14.092 [2024-11-20 10:00:50.803529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.803555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.803663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.803700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.803858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.803904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.804038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.804088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.804214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.804254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.804368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.804399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.804536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.804588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.804747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.804796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.804942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.804990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.805118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.805150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 
00:27:14.092 [2024-11-20 10:00:50.805282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.805315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.805413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.805439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.805573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.805626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.805731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.805761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.805886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.805932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.806024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.092 [2024-11-20 10:00:50.806051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.092 qpair failed and we were unable to recover it. 00:27:14.092 [2024-11-20 10:00:50.806149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.806178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.806325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.806352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.806446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.806472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.806556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.806583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 
00:27:14.093 [2024-11-20 10:00:50.806697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.806724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.806858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.806885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.807057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.807104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.807206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.807243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.807376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.807418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.807532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.807565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.807683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.807710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.807867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.807914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.808037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.808086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.808184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.808215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 
00:27:14.093 [2024-11-20 10:00:50.808371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.808399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.808538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.808588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.808722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.808769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.808902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.808945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.809079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.809120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.809245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.809271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.809384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.809417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.809548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.809579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.809699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.809729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.809864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.809912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 
00:27:14.093 [2024-11-20 10:00:50.810040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.810071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.810206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.810236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.810350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.810377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.810495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.810527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.810620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.810647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.810759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.810789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.810923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.810955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.811091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.811135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.811251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.811278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.093 [2024-11-20 10:00:50.811393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.811420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 
00:27:14.093 [2024-11-20 10:00:50.811506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.093 [2024-11-20 10:00:50.811549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.093 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.811682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.811712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.811807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.811838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.811935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.811967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.812138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.812171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.812328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.812368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.812471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.812530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.812667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.812699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.812830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.812861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.812992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.813022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 
00:27:14.094 [2024-11-20 10:00:50.813160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.813190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.813330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.813357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.813470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.813497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.813597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.813627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.813745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.813788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.813919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.813951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.814083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.814116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.814255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.814282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.814376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.814403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.814523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.814549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 
00:27:14.094 [2024-11-20 10:00:50.814645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.814685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.814874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.814921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.815096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.815128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.815254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.815284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.815420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.815448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.815560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.815592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.815731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.815777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.815931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.815999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.816116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.816144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.816267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.816297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 
00:27:14.094 [2024-11-20 10:00:50.816397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.816424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.816538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.816565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.816733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.816780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.816874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.816913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.817034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.817082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.817248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.817276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.817379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.817406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.817524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.817552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.094 qpair failed and we were unable to recover it. 00:27:14.094 [2024-11-20 10:00:50.817647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.094 [2024-11-20 10:00:50.817674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.817832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.817879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 
00:27:14.095 [2024-11-20 10:00:50.818019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.818065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.818213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.818241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.818356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.818382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.818507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.818534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.818673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.818720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.818837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.818883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.819011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.819056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.819210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.819242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.819362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.819398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.819517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.819543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 
00:27:14.095 [2024-11-20 10:00:50.819637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.819663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.819762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.819792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.819939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.819966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.820133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.820163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.820264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.820294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.820424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.820451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.820531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.820557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.820711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.820741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.820899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.820930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.821060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.821093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 
00:27:14.095 [2024-11-20 10:00:50.821198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.821230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.821414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.821454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.821547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.821575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.821741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.821787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.821910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.821956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.822066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.822119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.822236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.822264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.822367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.822396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.822511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.822538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.822650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.822677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 
00:27:14.095 [2024-11-20 10:00:50.822769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.822796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.822889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.822917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.823033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.823061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.823184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.823215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.823319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.823348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.823463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.095 [2024-11-20 10:00:50.823490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.095 qpair failed and we were unable to recover it. 00:27:14.095 [2024-11-20 10:00:50.823604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.823633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.823719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.823765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.823902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.823934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.824069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.824100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 
00:27:14.096 [2024-11-20 10:00:50.824240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.824271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.824407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.824434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.824558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.824587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.824720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.824764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.824875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.824922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.825025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.825055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.825229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.825269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.825384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.825413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.825506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.825533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.825671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.825703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 
00:27:14.096 [2024-11-20 10:00:50.825830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.825862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.826014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.826063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.826174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.826201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.826286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.826319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.826400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.826427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.826517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.826545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.826643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.826675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.826785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.826817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.827024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.827055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.827213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.827244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 
00:27:14.096 [2024-11-20 10:00:50.827385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.827425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.827562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.827609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.827756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.827801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.827899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.827926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.828010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.828036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.828124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.828150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.828261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.828287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.828412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.828438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.828545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.828573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.828671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.828699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 
00:27:14.096 [2024-11-20 10:00:50.828814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.828840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.828952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.828987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.829077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.829107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.096 qpair failed and we were unable to recover it. 00:27:14.096 [2024-11-20 10:00:50.829189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.096 [2024-11-20 10:00:50.829220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.829344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.829371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.829483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.829510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.829613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.829640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.829725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.829751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.829871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.829897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.830018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.830044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 
00:27:14.097 [2024-11-20 10:00:50.830156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.830182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.830274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.830308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.830419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.830450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.830555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.830581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.830740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.830770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.830976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.831006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.831159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.831186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.831320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.831368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.831544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.831575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.831713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.831744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 
00:27:14.097 [2024-11-20 10:00:50.831902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.831950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.832073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.832103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.832217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.832244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.832333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.832359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.832479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.832505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.832613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.832644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.832832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.832879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.833012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.833042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.833168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.833199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.833361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.833401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 
00:27:14.097 [2024-11-20 10:00:50.833514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.833554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.833651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.833679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.833850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.833881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.834073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.834105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.834211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.834243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.834415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.834444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.834532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.097 [2024-11-20 10:00:50.834560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.097 qpair failed and we were unable to recover it. 00:27:14.097 [2024-11-20 10:00:50.834677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.834703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.834849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.834896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.835076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.835107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 
00:27:14.098 [2024-11-20 10:00:50.835241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.835271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.835400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.835427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.835515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.835542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.835635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.835678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.835844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.835886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.836032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.836062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.836184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.836215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.836355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.836396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.836532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.836570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.836687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.836735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 
00:27:14.098 [2024-11-20 10:00:50.836869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.836914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.837055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.837081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.837169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.837196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.837278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.837323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.837412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.837438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.837534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.837561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.837681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.837710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.837806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.837833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.837950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.837977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.838064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.838102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 
00:27:14.098 [2024-11-20 10:00:50.838224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.838264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.838363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.838391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.838560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.838590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.838718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.838769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.838929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.838979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.839095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.839123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.839211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.839239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.839372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.839399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.839484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.839510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.839619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.839667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 
00:27:14.098 [2024-11-20 10:00:50.839808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.839855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.839953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.839985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.840112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.840156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.840265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.840291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.840412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.098 [2024-11-20 10:00:50.840439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.098 qpair failed and we were unable to recover it. 00:27:14.098 [2024-11-20 10:00:50.840544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.840570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.840697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.840724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.840893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.840936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.841065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.841095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.841203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.841233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 
00:27:14.099 [2024-11-20 10:00:50.841366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.841393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.841509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.841537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.841652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.841678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.841797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.841832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.842021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.842051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.842193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.842223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.842358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.842385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.842503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.842529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.842642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.842668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.842842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.842872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 
00:27:14.099 [2024-11-20 10:00:50.842982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.843008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.843119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.843149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.843276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.843342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.843472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.843499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.843613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.843658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.843804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.843856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.843962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.843992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.844101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.844130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.844283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.844323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.844463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.844490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 
00:27:14.099 [2024-11-20 10:00:50.844606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.844632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.844806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.844836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.844965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.844995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.845127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.845155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.845284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.845325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.845437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.845463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.845550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.845577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.845686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.845717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.845897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.845926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.846035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.846064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 
00:27:14.099 [2024-11-20 10:00:50.846200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.846229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.846345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.846371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.099 [2024-11-20 10:00:50.846487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.099 [2024-11-20 10:00:50.846513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.099 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.846597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.846638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.846749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.846791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.846944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.846973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.847097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.847127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.847240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.847267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.847361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.847388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.847505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.847530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 
00:27:14.100 [2024-11-20 10:00:50.847631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.847661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.847782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.847811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.847937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.847967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.848133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.848191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.848332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.848361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.848501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.848545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.848661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.848706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.848818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.848866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.848981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.849007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.849123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.849149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 
00:27:14.100 [2024-11-20 10:00:50.849255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.849281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.849388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.849428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.849575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.849603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.849716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.849742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.849832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.849858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.849953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.849981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.850068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.850094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.850228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.850255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.850358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.850384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.850480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.850505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 
00:27:14.100 [2024-11-20 10:00:50.850695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.850720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.850795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.850820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.850957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.851001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.851096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.851124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.851268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.851297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.851427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.851454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.851558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.851602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.851754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.851792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.851912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.851942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 00:27:14.100 [2024-11-20 10:00:50.852048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.100 [2024-11-20 10:00:50.852075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.100 qpair failed and we were unable to recover it. 
[2024-11-20 10:00:50.852161 - 10:00:50.884345, elapsed 00:27:14.100 - 00:27:14.105: the same three-line failure sequence repeats continuously — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90, 0x7f4a30000b90, or 0x7f4a2c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
00:27:14.106 [2024-11-20 10:00:50.884505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.884540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.884683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.884713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.884834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.884864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.884991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.885021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.885115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.885145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.885280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.885331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.885432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.885462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.885562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.885592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.885726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.885756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.885856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.885886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 
00:27:14.106 [2024-11-20 10:00:50.885985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.886015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.886111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.886141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.886263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.886293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.886431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.886461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.886593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.886623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.886734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.886764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.886921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.886951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.887082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.887112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.887263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.887292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.887432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.887462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 
00:27:14.106 [2024-11-20 10:00:50.887553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.887583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.887712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.887742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.887837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.887869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.888026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.888056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.888188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.888217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.888362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.888392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.888546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.888576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.888674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.888705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.888858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.888888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.889012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.889042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 
00:27:14.106 [2024-11-20 10:00:50.889198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.889228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.889352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.889383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.889537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.889567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.889668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.889698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.889852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.889882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.890039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.890069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.890234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.890263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.106 [2024-11-20 10:00:50.890448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.106 [2024-11-20 10:00:50.890507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.106 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.890716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.890773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.890900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.890932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 
00:27:14.107 [2024-11-20 10:00:50.891042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.891077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.891208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.891237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.891351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.891423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.891549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.891608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.891826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.891879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.891976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.892005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.892133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.892165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.892318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.892349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.892444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.892475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.892662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.892719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 
00:27:14.107 [2024-11-20 10:00:50.892842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.892872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.892997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.893027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.893162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.893192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.893298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.893350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.893530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.893590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.893686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.893717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.893836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.893866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.893963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.893993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.894121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.894151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.894275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.894312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 
00:27:14.107 [2024-11-20 10:00:50.894435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.894464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.894592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.894622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.894752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.894782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.894920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.894950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.895050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.895080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.895213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.895244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.895334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.895365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.895478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.895508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.895639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.895669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.895796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.895826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 
00:27:14.107 [2024-11-20 10:00:50.895927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.895957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.896112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.896142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.896297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.896334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.896459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.896488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.896617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.896647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.896737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.896767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.896878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.107 [2024-11-20 10:00:50.896908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.107 qpair failed and we were unable to recover it. 00:27:14.107 [2024-11-20 10:00:50.897047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.897076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.897178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.897209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.897330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.897361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 
00:27:14.108 [2024-11-20 10:00:50.897513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.897548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.897648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.897678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.897775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.897805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.897948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.897978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.898112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.898142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.898279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.898316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.898410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.898440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.898669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.898731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.898967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.899023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.899180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.899210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 
00:27:14.108 [2024-11-20 10:00:50.899362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.899427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.899578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.899649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.899880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.899933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.900032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.900063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.900203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.900233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.900353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.900384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.900518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.900547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.900710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.900739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.900865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.900896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.901004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.901035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 
00:27:14.108 [2024-11-20 10:00:50.901140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.901170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.901271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.901308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.901440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.901470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.901596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.901626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.901717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.901747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.901869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.901899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.902026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.902057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.902163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.902193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.902312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.902342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.902449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.902479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 
00:27:14.108 [2024-11-20 10:00:50.902580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.902610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.902741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.902771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.902924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.902954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.903078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.903108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.903239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.903269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.903384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.903415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.903515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.903545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.903673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.903703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.903826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.903856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.903953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.903984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 
00:27:14.108 [2024-11-20 10:00:50.904078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.904114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.904214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.904244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.904359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.904390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.904489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.108 [2024-11-20 10:00:50.904519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.108 qpair failed and we were unable to recover it. 00:27:14.108 [2024-11-20 10:00:50.904679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.109 [2024-11-20 10:00:50.904709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.109 qpair failed and we were unable to recover it. 00:27:14.109 [2024-11-20 10:00:50.904835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.109 [2024-11-20 10:00:50.904865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.109 qpair failed and we were unable to recover it. 00:27:14.109 [2024-11-20 10:00:50.904963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.109 [2024-11-20 10:00:50.904994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.109 qpair failed and we were unable to recover it. 00:27:14.109 [2024-11-20 10:00:50.905130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.109 [2024-11-20 10:00:50.905161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.109 qpair failed and we were unable to recover it. 00:27:14.109 [2024-11-20 10:00:50.905290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.109 [2024-11-20 10:00:50.905336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.109 qpair failed and we were unable to recover it. 00:27:14.109 [2024-11-20 10:00:50.905490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.109 [2024-11-20 10:00:50.905521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.109 qpair failed and we were unable to recover it. 
00:27:14.109 [2024-11-20 10:00:50.905614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.109 [2024-11-20 10:00:50.905646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.109 qpair failed and we were unable to recover it. 00:27:14.109 [2024-11-20 10:00:50.905731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.109 [2024-11-20 10:00:50.905761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.109 qpair failed and we were unable to recover it. 00:27:14.109 [2024-11-20 10:00:50.905861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.109 [2024-11-20 10:00:50.905891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.109 qpair failed and we were unable to recover it. 00:27:14.109 [2024-11-20 10:00:50.905997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.109 [2024-11-20 10:00:50.906027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.109 qpair failed and we were unable to recover it. 00:27:14.109 [2024-11-20 10:00:50.906160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.109 [2024-11-20 10:00:50.906191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.109 qpair failed and we were unable to recover it. 00:27:14.109 [2024-11-20 10:00:50.906294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.109 [2024-11-20 10:00:50.906331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.109 qpair failed and we were unable to recover it. 00:27:14.109 [2024-11-20 10:00:50.906465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.109 [2024-11-20 10:00:50.906494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.109 qpair failed and we were unable to recover it. 00:27:14.109 [2024-11-20 10:00:50.906599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.109 [2024-11-20 10:00:50.906629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.109 qpair failed and we were unable to recover it. 00:27:14.109 [2024-11-20 10:00:50.906785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.109 [2024-11-20 10:00:50.906815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.109 qpair failed and we were unable to recover it. 00:27:14.109 [2024-11-20 10:00:50.906933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.109 [2024-11-20 10:00:50.906963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.109 qpair failed and we were unable to recover it. 
00:27:14.112 [2024-11-20 10:00:50.937938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-20 10:00:50.937967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-20 10:00:50.938085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-20 10:00:50.938115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-20 10:00:50.938256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-20 10:00:50.938310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-20 10:00:50.938454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-20 10:00:50.938486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.112 [2024-11-20 10:00:50.938695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.112 [2024-11-20 10:00:50.938726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.112 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-20 10:00:50.938934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-20 10:00:50.938995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-20 10:00:50.939132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-20 10:00:50.939162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-20 10:00:50.939285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-20 10:00:50.939322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.113 [2024-11-20 10:00:50.939423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.113 [2024-11-20 10:00:50.939460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.113 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.939599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.939629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 
00:27:14.395 [2024-11-20 10:00:50.939726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.939756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.939844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.939875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.940007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.940037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.940164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.940194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.940335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.940366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.940493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.940523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.940630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.940660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.940786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.940818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.940917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.940948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.941057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.941089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 
00:27:14.395 [2024-11-20 10:00:50.941192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.941228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.941355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.941385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.941537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.941568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.941705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.941736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.941835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.941865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.941993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.942024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.942154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.942187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.942317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.942348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.942482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.942512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.942655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.942720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 
00:27:14.395 [2024-11-20 10:00:50.942957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.942986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.943111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.943141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.943339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.943372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.943574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.395 [2024-11-20 10:00:50.943632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.395 qpair failed and we were unable to recover it. 00:27:14.395 [2024-11-20 10:00:50.943823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.943920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.944154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.944191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.944322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.944399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.944669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.944735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.944979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.945044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.945178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.945207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 
00:27:14.396 [2024-11-20 10:00:50.945378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.945433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.945643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.945695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.945856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.945912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.946027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.946057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.946185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.946217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.946328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.946358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.946513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.946543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.946671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.946701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.946822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.946852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.946987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.947016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 
00:27:14.396 [2024-11-20 10:00:50.947152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.947182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.947284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.947321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.947457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.947486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.947593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.947623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.947758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.947788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.947943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.947973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.948073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.948104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.948206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.948236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.948394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.948440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.948554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.948588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 
00:27:14.396 [2024-11-20 10:00:50.948720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.948750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.948879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.948910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.949043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.949073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.949175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.949206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.949406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.949473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.949659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.949688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.949890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.949954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.950131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.950161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.950294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.950330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.950438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.950469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 
00:27:14.396 [2024-11-20 10:00:50.950633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.950688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.950920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.396 [2024-11-20 10:00:50.950974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.396 qpair failed and we were unable to recover it. 00:27:14.396 [2024-11-20 10:00:50.951148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.951204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.951334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.951365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.951507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.951562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.951665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.951701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.951900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.951968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.952204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.952286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.952504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.952534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.952702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.952768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 
00:27:14.397 [2024-11-20 10:00:50.953027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.953091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.953375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.953405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.953533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.953563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.953687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.953734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.953951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.954015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.954195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.954225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.954358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.954388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.954488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.954517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.954725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.954793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.955092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.955156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 
00:27:14.397 [2024-11-20 10:00:50.955391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.955421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.955554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.955584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.955714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.955744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.955868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.955898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.956040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.956104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.956356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.956397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.956553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.956600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.956854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.956917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.957177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.957242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.957466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.957496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 
00:27:14.397 [2024-11-20 10:00:50.957592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.957622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.957730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.957760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.957875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.957905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.958068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.958132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.958400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.958431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.958559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.958615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.958891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.958954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.959201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.959268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.959457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.397 [2024-11-20 10:00:50.959487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.397 qpair failed and we were unable to recover it. 00:27:14.397 [2024-11-20 10:00:50.959599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.959629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 
00:27:14.398 [2024-11-20 10:00:50.959733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.959762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.959893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.959922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.960078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.960141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.960410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.960440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.960569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.960599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.960733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.960767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.960856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.960886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.961141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.961205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.961476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.961506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.961730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.961794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 
00:27:14.398 [2024-11-20 10:00:50.962043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.962076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.962210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.962259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.962401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.962432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.962640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.962719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.963008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.963071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.963351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.963381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.963544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.963574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.963796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.963859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.964105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.964169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.964448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.964479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 
00:27:14.398 [2024-11-20 10:00:50.964573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.964603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.964733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.964764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.964989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.965052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.965258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.965288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.965425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.965455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.965544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.965574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.965804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.965833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.965958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.966005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.966243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.966272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.966413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.966443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 
00:27:14.398 [2024-11-20 10:00:50.966572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.966641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.966891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.966955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.967223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.967288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.967480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.967511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.967726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.967755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.967988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.968052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.398 qpair failed and we were unable to recover it. 00:27:14.398 [2024-11-20 10:00:50.968326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.398 [2024-11-20 10:00:50.968383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.968507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.968536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.968661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.968691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.968919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.968984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 
00:27:14.399 [2024-11-20 10:00:50.969268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.969374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.969577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.969643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.969902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.969966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.970242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.970343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.970584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.970647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.970837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.970918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.971181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.971246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.971517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.971580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.971872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.971935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.972173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.972238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 
00:27:14.399 [2024-11-20 10:00:50.972475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.972539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.972824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.972888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.973196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.973261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.973554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.973624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.973888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.973952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.974160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.974223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.974476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.974541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.974758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.974822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.975063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.975126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.975432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.975498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 
00:27:14.399 [2024-11-20 10:00:50.975733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.975797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.976084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.976147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.976353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.976417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.976706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.976769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.977049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.977112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.977378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.977455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.977744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.977807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.978101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.399 [2024-11-20 10:00:50.978164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.399 qpair failed and we were unable to recover it. 00:27:14.399 [2024-11-20 10:00:50.978422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-20 10:00:50.978490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 00:27:14.400 [2024-11-20 10:00:50.978762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.400 [2024-11-20 10:00:50.978826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.400 qpair failed and we were unable to recover it. 
00:27:14.405 [2024-11-20 10:00:51.040168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.040231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-20 10:00:51.040555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.040620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-20 10:00:51.040899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.040964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-20 10:00:51.041149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.041213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-20 10:00:51.041493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.041558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-20 10:00:51.041859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.041923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-20 10:00:51.042221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.042285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-20 10:00:51.042575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.042639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-20 10:00:51.042928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.042991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-20 10:00:51.043181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.043248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 
00:27:14.405 [2024-11-20 10:00:51.043520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.043585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-20 10:00:51.043840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.043904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-20 10:00:51.044164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.044228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-20 10:00:51.044506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.044572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-20 10:00:51.044863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.044926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-20 10:00:51.045213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.045287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-20 10:00:51.045542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.045606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-20 10:00:51.045827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.045891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-20 10:00:51.046182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.046246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 00:27:14.405 [2024-11-20 10:00:51.046530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.405 [2024-11-20 10:00:51.046595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.405 qpair failed and we were unable to recover it. 
00:27:14.405 [2024-11-20 10:00:51.046847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.046914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.047170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.047236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.047480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.047548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.047836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.047900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.048204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.048267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.048616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.048680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.048972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.049037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.049301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.049386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.049608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.049672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.049943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.050008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 
00:27:14.406 [2024-11-20 10:00:51.050274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.050357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.050648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.050711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.051005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.051068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.051342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.051409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.051715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.051779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.052041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.052104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.052366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.052433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.052734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.052799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.053040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.053103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.053388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.053454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 
00:27:14.406 [2024-11-20 10:00:51.053712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.053777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.054036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.054101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.054331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.054397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.054681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.054746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.054967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.055031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.055293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.055372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.055660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.055723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.055974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.056037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.056364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.056431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.056723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.056788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 
00:27:14.406 [2024-11-20 10:00:51.057086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.057151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.057437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.057503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.057761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.057825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.058079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.058144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.058368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.058434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.058654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.058729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.059018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.059082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.406 [2024-11-20 10:00:51.059298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.406 [2024-11-20 10:00:51.059374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.406 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.059584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.059648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.059904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.059969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 
00:27:14.407 [2024-11-20 10:00:51.060197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.060260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.060517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.060584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.060858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.060923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.061216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.061249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.061411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.061445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.061578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.061612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.061808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.061873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.062115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.062178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.062463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.062530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.062839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.062902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 
00:27:14.407 [2024-11-20 10:00:51.063126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.063191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.063473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.063539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.063763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.063830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.064078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.064145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.064368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.064449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.064702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.064768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.065071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.065134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.065372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.065437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.065655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.065718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.065942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.066005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 
00:27:14.407 [2024-11-20 10:00:51.066292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.066374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.066654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.066718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.067026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.067090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.067385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.067452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.067754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.067818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.068116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.068180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.068466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.068532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.068774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.068838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.069039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.069105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.069358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.069424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 
00:27:14.407 [2024-11-20 10:00:51.069673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.069739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.069994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.070059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.070284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.070363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.070631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.070695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.407 [2024-11-20 10:00:51.070945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.407 [2024-11-20 10:00:51.071009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.407 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.071211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.071285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.071617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.071681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.071937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.072002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.072296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.072394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.072661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.072726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 
00:27:14.408 [2024-11-20 10:00:51.072980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.073046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.073346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.073412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.073677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.073749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.074035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.074099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.074385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.074450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.074674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.074740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.074996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.075059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.075322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.075391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.075598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.075663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.075963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.076028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 
00:27:14.408 [2024-11-20 10:00:51.076279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.076397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.076704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.076767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.077026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.077093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.077344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.077411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.077710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.077774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.078038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.078102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.078398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.078464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.078705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.078768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.079024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.079089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.079281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.079360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 
00:27:14.408 [2024-11-20 10:00:51.079662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.079725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.079986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.080050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.080324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.080391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.080594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.080658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.080901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.080966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.081265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.081349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.081612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.081675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.081927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.081990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.082280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.082364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 00:27:14.408 [2024-11-20 10:00:51.082579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.408 [2024-11-20 10:00:51.082640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.408 qpair failed and we were unable to recover it. 
00:27:14.408 [2024-11-20 10:00:51.082880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.408 [2024-11-20 10:00:51.082944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420
00:27:14.408 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnect attempt logged between 10:00:51.083 and 10:00:51.150 (elapsed 00:27:14.408-00:27:14.414): posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:27:14.414 [2024-11-20 10:00:51.150988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.414 [2024-11-20 10:00:51.151051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420
00:27:14.414 qpair failed and we were unable to recover it.
00:27:14.414 [2024-11-20 10:00:51.151270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-20 10:00:51.151353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-20 10:00:51.151584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-20 10:00:51.151648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-20 10:00:51.151950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-20 10:00:51.152013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-20 10:00:51.152227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-20 10:00:51.152291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-20 10:00:51.152601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-20 10:00:51.152665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-20 10:00:51.152912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-20 10:00:51.152975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-20 10:00:51.153267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-20 10:00:51.153363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-20 10:00:51.153625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-20 10:00:51.153689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-20 10:00:51.153979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-20 10:00:51.154043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 00:27:14.414 [2024-11-20 10:00:51.154351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.414 [2024-11-20 10:00:51.154418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.414 qpair failed and we were unable to recover it. 
00:27:14.415 [2024-11-20 10:00:51.154653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.154721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.155014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.155078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.155368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.155435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.155683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.155747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.156030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.156095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.156345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.156411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.156646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.156711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.157001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.157064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.157331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.157397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.157653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.157717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 
00:27:14.415 [2024-11-20 10:00:51.157959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.158024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.158259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.158340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.158635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.158710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.158962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.159026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.159332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.159398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.159688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.159753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.160040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.160105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.160379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.160444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.160693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.160756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.161017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.161081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 
00:27:14.415 [2024-11-20 10:00:51.161386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.161451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.161652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.161720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.161986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.162051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.162347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.162412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.162676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.162739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.162993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.163058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.163335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.163402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.163668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.163733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.163939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.164005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.164257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.164339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 
00:27:14.415 [2024-11-20 10:00:51.164633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.164698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.164996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.165060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.165295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.165379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.165618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.165683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.165925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.165991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.166234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.166298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.166609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.166673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.415 qpair failed and we were unable to recover it. 00:27:14.415 [2024-11-20 10:00:51.166927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.415 [2024-11-20 10:00:51.166991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.167229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.167293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.167591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.167658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 
00:27:14.416 [2024-11-20 10:00:51.167948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.168012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.168300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.168383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.168679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.168744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.168987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.169050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.169333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.169399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.169620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.169686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.169982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.170045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.170345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.170411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.170699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.170763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.171054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.171118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 
00:27:14.416 [2024-11-20 10:00:51.171367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.171432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.171716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.171780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.172029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.172105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.172349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.172415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.172705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.172770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.173053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.173117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.173406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.173472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.173773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.173844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.174124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.174189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.174588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.174657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 
00:27:14.416 [2024-11-20 10:00:51.174925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.174990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.175252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.175336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.175606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.175670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.175866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.175930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.176187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.176251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.176568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.176632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.176868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.176932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.177220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.177284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.177551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.177615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.177862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.177927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 
00:27:14.416 [2024-11-20 10:00:51.178215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.178280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.178564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.178628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.178919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.178983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.179277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.179360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.179579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.416 [2024-11-20 10:00:51.179643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.416 qpair failed and we were unable to recover it. 00:27:14.416 [2024-11-20 10:00:51.179939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.180003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.180258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.180339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.180634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.180698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.180954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.181018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.181285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.181387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 
00:27:14.417 [2024-11-20 10:00:51.181649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.181713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.181972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.182036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.182290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.182377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.182688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.182752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.183017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.183080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.183337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.183402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.183698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.183761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.184017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.184083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.184362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.184430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.184728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.184792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 
00:27:14.417 [2024-11-20 10:00:51.185023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.185086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.185382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.185447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.185735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.185819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.186074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.186139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.186375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.186440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.186677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.186741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.186987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.187050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.187319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.187389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.187643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.187708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.187936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.188000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 
00:27:14.417 [2024-11-20 10:00:51.188246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.188329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.188566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.188632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.188936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.189000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.189252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.189333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.189605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.189669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.189968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.190032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.190248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.190348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.190622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.190687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.190985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.191048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.191330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.191397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 
00:27:14.417 [2024-11-20 10:00:51.191694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.191758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.191984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.417 [2024-11-20 10:00:51.192047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.417 qpair failed and we were unable to recover it. 00:27:14.417 [2024-11-20 10:00:51.192337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.418 [2024-11-20 10:00:51.192403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.418 qpair failed and we were unable to recover it. 00:27:14.418 [2024-11-20 10:00:51.192698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.418 [2024-11-20 10:00:51.192764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.418 qpair failed and we were unable to recover it. 00:27:14.418 [2024-11-20 10:00:51.192982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.418 [2024-11-20 10:00:51.193045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.418 qpair failed and we were unable to recover it. 00:27:14.418 [2024-11-20 10:00:51.193260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.418 [2024-11-20 10:00:51.193340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.418 qpair failed and we were unable to recover it. 00:27:14.418 [2024-11-20 10:00:51.193569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.418 [2024-11-20 10:00:51.193635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.418 qpair failed and we were unable to recover it. 00:27:14.418 [2024-11-20 10:00:51.193887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.418 [2024-11-20 10:00:51.193952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.418 qpair failed and we were unable to recover it. 00:27:14.418 [2024-11-20 10:00:51.194207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.418 [2024-11-20 10:00:51.194271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.418 qpair failed and we were unable to recover it. 00:27:14.418 [2024-11-20 10:00:51.194520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.418 [2024-11-20 10:00:51.194584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.418 qpair failed and we were unable to recover it. 
00:27:14.418 [2024-11-20 10:00:51.194835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.418 [2024-11-20 10:00:51.194899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420
00:27:14.418 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats continuously for each reconnect attempt between 2024-11-20 10:00:51.194835 and 10:00:51.259655 ...]
00:27:14.424 [2024-11-20 10:00:51.259622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.424 [2024-11-20 10:00:51.259655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420
00:27:14.424 qpair failed and we were unable to recover it.
00:27:14.424 [2024-11-20 10:00:51.259805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.424 [2024-11-20 10:00:51.259838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.424 qpair failed and we were unable to recover it. 00:27:14.424 [2024-11-20 10:00:51.259948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.424 [2024-11-20 10:00:51.259982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.424 qpair failed and we were unable to recover it. 00:27:14.424 [2024-11-20 10:00:51.260122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.424 [2024-11-20 10:00:51.260156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.424 qpair failed and we were unable to recover it. 00:27:14.424 [2024-11-20 10:00:51.260278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.424 [2024-11-20 10:00:51.260337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.424 qpair failed and we were unable to recover it. 00:27:14.424 [2024-11-20 10:00:51.260477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.424 [2024-11-20 10:00:51.260509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.424 qpair failed and we were unable to recover it. 00:27:14.424 [2024-11-20 10:00:51.260648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.424 [2024-11-20 10:00:51.260680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.424 qpair failed and we were unable to recover it. 00:27:14.424 [2024-11-20 10:00:51.260810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.424 [2024-11-20 10:00:51.260841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.424 qpair failed and we were unable to recover it. 00:27:14.424 [2024-11-20 10:00:51.260976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.424 [2024-11-20 10:00:51.261008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.424 qpair failed and we were unable to recover it. 00:27:14.424 [2024-11-20 10:00:51.261124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.424 [2024-11-20 10:00:51.261156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.424 qpair failed and we were unable to recover it. 00:27:14.424 [2024-11-20 10:00:51.261288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.424 [2024-11-20 10:00:51.261331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.424 qpair failed and we were unable to recover it. 
00:27:14.424 [2024-11-20 10:00:51.261509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.424 [2024-11-20 10:00:51.261541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.424 qpair failed and we were unable to recover it. 00:27:14.424 [2024-11-20 10:00:51.261680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.424 [2024-11-20 10:00:51.261714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.424 qpair failed and we were unable to recover it. 00:27:14.424 [2024-11-20 10:00:51.261857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.424 [2024-11-20 10:00:51.261889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.424 qpair failed and we were unable to recover it. 00:27:14.424 [2024-11-20 10:00:51.261985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.262017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.262134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.262166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.262298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.262341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.262455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.262487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.262619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.262651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.262757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.262789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.262903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.262937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 
00:27:14.425 [2024-11-20 10:00:51.263077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.263109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.263270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.263321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.263436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.263468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.263636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.263668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.263782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.263813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.263924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.263957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.264052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.264115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.264379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.264411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.264553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.264585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.264750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.264787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 
00:27:14.425 [2024-11-20 10:00:51.264960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.264992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.265109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.265141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.265284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.265324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.265444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.265476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.265582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.265614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.265762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.265794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.265957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.265989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.266102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.266134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.425 qpair failed and we were unable to recover it. 00:27:14.425 [2024-11-20 10:00:51.266241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.425 [2024-11-20 10:00:51.266273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.266430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.266462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 
00:27:14.426 [2024-11-20 10:00:51.266618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.266651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.266754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.266787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.266921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.266954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.267070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.267103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.267235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.267267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.267409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.267442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.267550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.267582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.267717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.267749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.267882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.267914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.268029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.268063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 
00:27:14.426 [2024-11-20 10:00:51.268220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.268252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.268407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.268440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.268534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.268566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.268680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.268712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.268861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.268893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.269031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.269065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.269215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.269251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.269409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.269442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.269560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.269592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.269734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.269766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 
00:27:14.426 [2024-11-20 10:00:51.269933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.269964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.270105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.270137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.426 qpair failed and we were unable to recover it. 00:27:14.426 [2024-11-20 10:00:51.270319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.426 [2024-11-20 10:00:51.270352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.270490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.270521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.270664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.270695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.270861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.270893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.271026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.271058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.271191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.271223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.271353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.271385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.271524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.271561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 
00:27:14.427 [2024-11-20 10:00:51.271686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.271718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.271884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.271916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.272050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.272083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.272183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.272215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.272319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.272352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.272493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.272525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.272615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.272647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.272794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.272826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.272960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.272993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.273166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.273217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 
00:27:14.427 [2024-11-20 10:00:51.273419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.273453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.273624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.273656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.273793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.273825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.274001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.274044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.274178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.274211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.274363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.274396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.427 [2024-11-20 10:00:51.274508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.427 [2024-11-20 10:00:51.274540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.427 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.274651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.274683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.274822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.274854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.274992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.275025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 
00:27:14.428 [2024-11-20 10:00:51.275197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.275228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.275386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.275420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.275528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.275560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.275666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.275698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.275834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.275866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.275995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.276028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.276138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.276170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.276264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.276296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.276447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.276480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.276627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.276659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 
00:27:14.428 [2024-11-20 10:00:51.276790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.276822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.276917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.276949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.277085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.277117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.277248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.277279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.277428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.277460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.277599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.277632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.277773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.277805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.277936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.277969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.278108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.278142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.278244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.278281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 
00:27:14.428 [2024-11-20 10:00:51.278397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.278430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.278570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.428 [2024-11-20 10:00:51.278602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.428 qpair failed and we were unable to recover it. 00:27:14.428 [2024-11-20 10:00:51.278736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.278769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.278935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.278967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.279077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.279109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.279271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.279312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.279424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.279456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.279574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.279606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.279729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.279762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.279883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.279915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 
00:27:14.429 [2024-11-20 10:00:51.280057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.280089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.280230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.280261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.280391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.280424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.280540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.280572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.280701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.280733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.280900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.280932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.281036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.281068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.281197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.281229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.281346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.281380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.281475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.281507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 
00:27:14.429 [2024-11-20 10:00:51.281682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.281713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.281823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.281857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.281966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.281998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.282128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.282159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.282213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121bf30 (9): Bad file descriptor 00:27:14.429 [2024-11-20 10:00:51.282476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.282526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.282732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.282807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.282966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.283026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.283208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.429 [2024-11-20 10:00:51.283242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.429 qpair failed and we were unable to recover it. 00:27:14.429 [2024-11-20 10:00:51.283385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.430 [2024-11-20 10:00:51.283419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.430 qpair failed and we were unable to recover it. 
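In the block above the pattern briefly changes: nvme_tcp_qpair_process_completions() fails to flush tqpair=0x121bf30 with errno 9 (EBADF, the socket has already been torn down), after which the connect() attempts continue on a new tqpair (0x120dfa0) against the same 10.0.0.2:4420 and keep being refused. A minimal sketch, assuming a Linux host with access to the test network, for checking independently of SPDK whether anything is listening on that address and port (both values are taken from the log; adjust them if the target differs):

    import socket

    addr, port = "10.0.0.2", 4420   # NVMe/TCP target address/port from the log above
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2)
    try:
        s.connect((addr, port))
        print("listener present on %s:%d" % (addr, port))
    except ConnectionRefusedError:
        # corresponds to errno 111 in the log: no listener on the target port
        print("connection refused - nothing listening on %s:%d" % (addr, port))
    except OSError as e:
        print("connect failed:", e)
    finally:
        s.close()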
00:27:14.430 [2024-11-20 10:00:51.283641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.430 [2024-11-20 10:00:51.283704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.430 qpair failed and we were unable to recover it. 00:27:14.430 [2024-11-20 10:00:51.284027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.430 [2024-11-20 10:00:51.284091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.430 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-20 10:00:51.284349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-20 10:00:51.284382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-20 10:00:51.284526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-20 10:00:51.284558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-20 10:00:51.284730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-20 10:00:51.284762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-20 10:00:51.284935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-20 10:00:51.284968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-20 10:00:51.285149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-20 10:00:51.285181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-20 10:00:51.285294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-20 10:00:51.285333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-20 10:00:51.285469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-20 10:00:51.285502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-20 10:00:51.285647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-20 10:00:51.285680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 
00:27:14.704 [2024-11-20 10:00:51.285836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-20 10:00:51.285868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-20 10:00:51.286003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-20 10:00:51.286036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.704 qpair failed and we were unable to recover it. 00:27:14.704 [2024-11-20 10:00:51.286148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.704 [2024-11-20 10:00:51.286181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.286351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.286385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.286497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.286530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.286672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.286705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.286872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.286906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.287045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.287077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.287216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.287249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.287372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.287406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 
00:27:14.705 [2024-11-20 10:00:51.287552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.287585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.287852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.287916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.288137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.288201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.288412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.288448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.288597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.288631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.288761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.288794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.289021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.289084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.289375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.289410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.289577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.289611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.289881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.289944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 
00:27:14.705 [2024-11-20 10:00:51.290176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.290209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.290355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.290408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.290613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.290689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.290973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.291036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.291278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.291318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.291435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.291468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.291727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.291791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.292110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.292184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.292432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.292464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.292625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.292657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 
00:27:14.705 [2024-11-20 10:00:51.292859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.292910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.293149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.293215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.293466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.293517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.293696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.293746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.294014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.294048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.294189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.294222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.294510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.294562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.294845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.294909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.295200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.295232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 00:27:14.705 [2024-11-20 10:00:51.295379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.295411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.705 qpair failed and we were unable to recover it. 
00:27:14.705 [2024-11-20 10:00:51.295687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.705 [2024-11-20 10:00:51.295750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.296037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.296101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.296342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.296397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.296630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.296665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.296802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.296835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.296986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.297050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.297257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.297340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.297597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.297663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.297912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.297946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.298063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.298097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 
00:27:14.706 [2024-11-20 10:00:51.298223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.298256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.298503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.298536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.298649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.298682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.298822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.298854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.299100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.299181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.299453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.299488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.299615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.299649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.299878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.299911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.300059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.300093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.300347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.300380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 
00:27:14.706 [2024-11-20 10:00:51.300494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.300527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.300712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.300745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.300889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.300922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.301020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.301054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.301194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.301227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.301483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.301516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.301660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.301693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.301912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.301976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.302281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.302319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.302449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.302482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 
00:27:14.706 [2024-11-20 10:00:51.302723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.302787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.303012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.303045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.303156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.303230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.303478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.303544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.303823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.303887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.304165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.304229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.304460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.304525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.706 [2024-11-20 10:00:51.304819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.706 [2024-11-20 10:00:51.304883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.706 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.305094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.305158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.305448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.305514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 
00:27:14.707 [2024-11-20 10:00:51.305798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.305863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.306108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.306145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.306281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.306323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.306502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.306568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.306848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.306911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.307155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.307222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.307508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.307573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.307846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.307878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.308013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.308045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.308176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.308208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 
00:27:14.707 [2024-11-20 10:00:51.308447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.308479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.308601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.308634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.308778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.308811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.309033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.309096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.309357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.309423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.309694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.309758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.310012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.310076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.310341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.310409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.310626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.310689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.310928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.310992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 
00:27:14.707 [2024-11-20 10:00:51.311183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.311248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.311523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.311590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.311850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.311914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.312110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.312174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.312458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.312523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.312808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.312872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.313164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.313229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.313505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.313537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.313672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.313704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.313854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.313886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 
00:27:14.707 [2024-11-20 10:00:51.314094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.314157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.314443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.314509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.314792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.314857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.315094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.315158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.315440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.315505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.315765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.707 [2024-11-20 10:00:51.315830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.707 qpair failed and we were unable to recover it. 00:27:14.707 [2024-11-20 10:00:51.316115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.316178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.316428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.316495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.316713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.316779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.317043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.317107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 
00:27:14.708 [2024-11-20 10:00:51.317404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.317468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.317701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.317765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.318022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.318086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.318379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.318412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.318544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.318576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.318851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.318916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.319210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.319274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.319489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.319553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.319791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.319822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.319958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.319990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 
00:27:14.708 [2024-11-20 10:00:51.320238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.320270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.320425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.320482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.320770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.320834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.321128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.321191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.321413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.321478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.321738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.321801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.322016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.322080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.322375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.322442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.322684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.322747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.323037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.323100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 
00:27:14.708 [2024-11-20 10:00:51.323354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.323420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.323664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.323728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.324031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.324094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.324334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.324400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.324646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.324710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.324943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.325007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.325288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.325364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.325670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.325734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.326002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.326066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.326355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.326431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 
00:27:14.708 [2024-11-20 10:00:51.326674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.326739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.326970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.327034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.327333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.327365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.327508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.708 [2024-11-20 10:00:51.327543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.708 qpair failed and we were unable to recover it. 00:27:14.708 [2024-11-20 10:00:51.327789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.709 [2024-11-20 10:00:51.327854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.709 qpair failed and we were unable to recover it. 00:27:14.709 [2024-11-20 10:00:51.328096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.709 [2024-11-20 10:00:51.328160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.709 qpair failed and we were unable to recover it. 00:27:14.709 [2024-11-20 10:00:51.328429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.709 [2024-11-20 10:00:51.328495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.709 qpair failed and we were unable to recover it. 00:27:14.709 [2024-11-20 10:00:51.328706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.709 [2024-11-20 10:00:51.328771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.709 qpair failed and we were unable to recover it. 00:27:14.709 [2024-11-20 10:00:51.329015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.709 [2024-11-20 10:00:51.329079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.709 qpair failed and we were unable to recover it. 00:27:14.709 [2024-11-20 10:00:51.329370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.709 [2024-11-20 10:00:51.329436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.709 qpair failed and we were unable to recover it. 
00:27:14.714 [2024-11-20 10:00:51.390555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.714 [2024-11-20 10:00:51.390621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.714 qpair failed and we were unable to recover it. 00:27:14.714 [2024-11-20 10:00:51.390843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.714 [2024-11-20 10:00:51.390907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.714 qpair failed and we were unable to recover it. 00:27:14.714 [2024-11-20 10:00:51.391200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.714 [2024-11-20 10:00:51.391264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.714 qpair failed and we were unable to recover it. 00:27:14.714 [2024-11-20 10:00:51.391487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.714 [2024-11-20 10:00:51.391551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.714 qpair failed and we were unable to recover it. 00:27:14.714 [2024-11-20 10:00:51.391806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.714 [2024-11-20 10:00:51.391869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.714 qpair failed and we were unable to recover it. 00:27:14.714 [2024-11-20 10:00:51.392088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.714 [2024-11-20 10:00:51.392152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.714 qpair failed and we were unable to recover it. 00:27:14.714 [2024-11-20 10:00:51.392404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.714 [2024-11-20 10:00:51.392484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.714 qpair failed and we were unable to recover it. 00:27:14.714 [2024-11-20 10:00:51.392774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.714 [2024-11-20 10:00:51.392837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.714 qpair failed and we were unable to recover it. 00:27:14.714 [2024-11-20 10:00:51.393117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.714 [2024-11-20 10:00:51.393180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.714 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.393456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.393520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 
00:27:14.715 [2024-11-20 10:00:51.393762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.393825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.394114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.394178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.394459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.394524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.394764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.394828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.395111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.395173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.395366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.395430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.395711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.395776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.396063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.396127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.396416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.396481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.396771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.396834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 
00:27:14.715 [2024-11-20 10:00:51.397097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.397164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.397451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.397516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.397770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.397834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.398049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.398115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.398376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.398444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.398718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.398782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.399036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.399101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.399389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.399454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.399707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.399771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.400060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.400124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 
00:27:14.715 [2024-11-20 10:00:51.400349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.400415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.400656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.400720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.400951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.401014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.401254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.401354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.401645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.401709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.401988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.402052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.402357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.402424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.402702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.402766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.403053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.403116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.403376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.403441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 
00:27:14.715 [2024-11-20 10:00:51.403729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.403792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.404011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.404075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.404338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.404404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.404607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.404671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.404909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.404973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.405212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.405275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.715 [2024-11-20 10:00:51.405597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.715 [2024-11-20 10:00:51.405662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.715 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.405917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.405981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.406260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.406356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.406651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.406716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 
00:27:14.716 [2024-11-20 10:00:51.406961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.407024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.407249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.407328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.407626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.407691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.407945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.408008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.408296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.408379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.408683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.408748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.409006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.409069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.409357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.409422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.409667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.409733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.410024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.410087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 
00:27:14.716 [2024-11-20 10:00:51.410298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.410406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.410665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.410729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.410991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.411055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.411283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.411363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.411600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.411664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.411899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.411962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.412247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.412322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.412617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.412680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.412920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.412985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.413273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.413355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 
00:27:14.716 [2024-11-20 10:00:51.413616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.413681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.413969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.414034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.414334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.414398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.414695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.414759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.415066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.415131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.415385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.415450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.415733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.415797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.416032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.416096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.416369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.416434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.416651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.416715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 
00:27:14.716 [2024-11-20 10:00:51.416947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.417011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.417246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.417323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.417616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.417680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.417886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.417950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.418153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.716 [2024-11-20 10:00:51.418218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.716 qpair failed and we were unable to recover it. 00:27:14.716 [2024-11-20 10:00:51.418533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.418599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.418889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.418953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.419201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.419267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.419572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.419637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.419932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.419995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 
00:27:14.717 [2024-11-20 10:00:51.420240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.420326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.420518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.420582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.420790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.420853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.421107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.421170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.421428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.421496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.421731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.421794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.422020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.422083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.422346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.422410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.422643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.422707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.422949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.423013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 
00:27:14.717 [2024-11-20 10:00:51.423262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.423339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.423580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.423660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.423940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.424004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.424292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.424370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.424627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.424691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.425024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.425087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.425337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.425403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.425694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.425757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.425987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.426050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.426355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.426419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 
00:27:14.717 [2024-11-20 10:00:51.426715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.426779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.427023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.427085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.427377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.427441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.427729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.427793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.428077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.428140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.428438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.428503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.428804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.428868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.429138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.429202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.429490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.429555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.429791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.429854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 
00:27:14.717 [2024-11-20 10:00:51.430106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.430169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.430449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.430514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.430790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.430854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.717 qpair failed and we were unable to recover it. 00:27:14.717 [2024-11-20 10:00:51.431070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.717 [2024-11-20 10:00:51.431133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.718 qpair failed and we were unable to recover it. 00:27:14.718 [2024-11-20 10:00:51.431421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.718 [2024-11-20 10:00:51.431485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.718 qpair failed and we were unable to recover it. 00:27:14.718 [2024-11-20 10:00:51.431684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.718 [2024-11-20 10:00:51.431747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.718 qpair failed and we were unable to recover it. 00:27:14.718 [2024-11-20 10:00:51.431990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.718 [2024-11-20 10:00:51.432053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.718 qpair failed and we were unable to recover it. 00:27:14.718 [2024-11-20 10:00:51.432315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.718 [2024-11-20 10:00:51.432380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.718 qpair failed and we were unable to recover it. 00:27:14.718 [2024-11-20 10:00:51.432600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.718 [2024-11-20 10:00:51.432672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.718 qpair failed and we were unable to recover it. 00:27:14.718 [2024-11-20 10:00:51.432930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.718 [2024-11-20 10:00:51.432993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.718 qpair failed and we were unable to recover it. 
00:27:14.718 [2024-11-20 10:00:51.433247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.718 [2024-11-20 10:00:51.433325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420
00:27:14.718 qpair failed and we were unable to recover it.
[... the same posix_sock_create (connect() failed, errno = 111) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x120dfa0 (addr=10.0.0.2, port=4420) repeats continuously from 2024-11-20 10:00:51.433614 through 10:00:51.502026, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:14.723 [2024-11-20 10:00:51.502283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.723 [2024-11-20 10:00:51.502362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.723 qpair failed and we were unable to recover it. 00:27:14.723 [2024-11-20 10:00:51.502593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.723 [2024-11-20 10:00:51.502657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.723 qpair failed and we were unable to recover it. 00:27:14.723 [2024-11-20 10:00:51.502942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.723 [2024-11-20 10:00:51.503005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.723 qpair failed and we were unable to recover it. 00:27:14.723 [2024-11-20 10:00:51.503322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.723 [2024-11-20 10:00:51.503388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.723 qpair failed and we were unable to recover it. 00:27:14.723 [2024-11-20 10:00:51.503675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.723 [2024-11-20 10:00:51.503739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.723 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.504035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.504108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.504370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.504437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.504724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.504787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.505012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.505075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.505359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.505424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 
00:27:14.724 [2024-11-20 10:00:51.505682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.505747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.506042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.506105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.506388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.506453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.506667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.506733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.506980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.507046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.507244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.507323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.507563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.507627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.507851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.507915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.508206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.508271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.508592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.508656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 
00:27:14.724 [2024-11-20 10:00:51.508874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.508937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.509167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.509230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.509493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.509559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.509848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.509911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.510191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.510255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.510485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.510549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.510836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.510899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.511196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.511259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.511563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.511627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.511873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.511940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 
00:27:14.724 [2024-11-20 10:00:51.512194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.512260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.512494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.512559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.512844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.512918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.513207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.513272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.513585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.513650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.513849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.513912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.514154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.514218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.514461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.514529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.514815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.514880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.515174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.515238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 
00:27:14.724 [2024-11-20 10:00:51.515533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.515597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.515814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.515877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.516111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.724 [2024-11-20 10:00:51.516175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.724 qpair failed and we were unable to recover it. 00:27:14.724 [2024-11-20 10:00:51.516411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.516476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.516763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.516827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.517015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.517079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.517332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.517398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.517626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.517690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.517948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.518012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.518314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.518380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 
00:27:14.725 [2024-11-20 10:00:51.518636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.518699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.518942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.519006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.519259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.519338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.519630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.519694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.519955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.520019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.520331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.520396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.520677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.520741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.521023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.521086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.521370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.521434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.521731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.521795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 
00:27:14.725 [2024-11-20 10:00:51.522047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.522111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.522403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.522468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.522756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.522820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.523039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.523103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.523384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.523448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.523730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.523794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.524088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.524152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.524430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.524495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.524733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.524797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.525040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.525103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 
00:27:14.725 [2024-11-20 10:00:51.525383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.525449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.525734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.525797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.526040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.526103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.526330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.526395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.526640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.526708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.526963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.527029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.527277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.527373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.527677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.527740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.528023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.528085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.528372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.528437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 
00:27:14.725 [2024-11-20 10:00:51.528731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.725 [2024-11-20 10:00:51.528795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.725 qpair failed and we were unable to recover it. 00:27:14.725 [2024-11-20 10:00:51.528999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.529064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.529326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.529391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.529643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.529707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.529987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.530050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.530297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.530398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.530648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.530712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.531022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.531086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.531389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.531454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.531669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.531732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 
00:27:14.726 [2024-11-20 10:00:51.532020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.532083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.532366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.532431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.532697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.532761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.533011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.533076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.533369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.533434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.533680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.533744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.533978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.534041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.534331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.534396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.534652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.534716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.535006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.535069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 
00:27:14.726 [2024-11-20 10:00:51.535338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.535413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.535712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.535776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.536013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.536076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.536358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.536423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.536714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.536778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.536988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.537051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.537299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.537377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.537611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.537675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.537916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.537979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.538213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.538277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 
00:27:14.726 [2024-11-20 10:00:51.538517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.538581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.538844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.538907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.539190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.539253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.539570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.539634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.539895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.539959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.540249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.540330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.540549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.540612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.540896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.540960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.541246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.541327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.726 qpair failed and we were unable to recover it. 00:27:14.726 [2024-11-20 10:00:51.541576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.726 [2024-11-20 10:00:51.541639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 
00:27:14.727 [2024-11-20 10:00:51.541875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-20 10:00:51.541938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 00:27:14.727 [2024-11-20 10:00:51.542220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-20 10:00:51.542282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 00:27:14.727 [2024-11-20 10:00:51.542590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-20 10:00:51.542652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 00:27:14.727 [2024-11-20 10:00:51.542922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-20 10:00:51.542986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 00:27:14.727 [2024-11-20 10:00:51.543231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-20 10:00:51.543294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 00:27:14.727 [2024-11-20 10:00:51.543555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-20 10:00:51.543619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 00:27:14.727 [2024-11-20 10:00:51.543851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-20 10:00:51.543915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 00:27:14.727 [2024-11-20 10:00:51.544125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-20 10:00:51.544198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 00:27:14.727 [2024-11-20 10:00:51.544505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-20 10:00:51.544570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 00:27:14.727 [2024-11-20 10:00:51.544806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.727 [2024-11-20 10:00:51.544870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:14.727 qpair failed and we were unable to recover it. 
00:27:14.727 [2024-11-20 10:00:51.545058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.727 [2024-11-20 10:00:51.545122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420
00:27:14.727 qpair failed and we were unable to recover it.
[... the same three-record failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, differing only in timestamps, from 10:00:51.545 through 10:00:51.613 ...]
00:27:15.014 [2024-11-20 10:00:51.613362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.613427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.613713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.613777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.614027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.614091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.614334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.614401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.614638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.614713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.614989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.615053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.615348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.615413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.615671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.615735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.616014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.616079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.616370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.616436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 
00:27:15.014 [2024-11-20 10:00:51.616671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.616736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.617017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.617081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.617367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.617432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.617658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.617724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.618015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.618080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.618358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.618422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.618676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.618740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.618986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.619050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.619346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.619411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.619706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.619770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 
00:27:15.014 [2024-11-20 10:00:51.620070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.620134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.620384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.620449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.620684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.620747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.620981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.621046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.621228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.621291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.621585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.621649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.621881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.621945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.622158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.622223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.622503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.622568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 00:27:15.014 [2024-11-20 10:00:51.622865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.622929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.014 qpair failed and we were unable to recover it. 
00:27:15.014 [2024-11-20 10:00:51.623235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.014 [2024-11-20 10:00:51.623298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.623604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.623668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.623910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.623975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.624264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.624348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.624613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.624676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.624912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.624976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.625210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.625273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.625584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.625648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.625935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.625999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.626297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.626379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 
00:27:15.015 [2024-11-20 10:00:51.626619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.626684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.626972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.627035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.627237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.627321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.627572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.627636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.627923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.627986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.628280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.628365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.628672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.628735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.629002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.629065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.629355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.629421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.629676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.629740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 
00:27:15.015 [2024-11-20 10:00:51.630025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.630088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.630364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.630429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.630671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.630736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.630970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.631033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.631326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.631392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.631616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.631680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.631929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.631992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.632240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.632318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.632616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.632680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.632988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.633052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 
00:27:15.015 [2024-11-20 10:00:51.633363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.633430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.633692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.633755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.634015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.634078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.634332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.634398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.634638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.634701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.634936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.634999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.635215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.635279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.015 [2024-11-20 10:00:51.635580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.015 [2024-11-20 10:00:51.635643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.015 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.635836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.635902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.636180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.636244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 
00:27:15.016 [2024-11-20 10:00:51.636542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.636606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.636896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.636959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.637204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.637279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.637502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.637566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.637848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.637911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.638154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.638217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.638470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.638535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.638748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.638812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.639065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.639128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.639380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.639446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 
00:27:15.016 [2024-11-20 10:00:51.639729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.639792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.640078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.640141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.640393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.640458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.640658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.640721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.640976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.641039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.641361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.641425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.641722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.641787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.642048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.642112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.642407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.642471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.642711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.642777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 
00:27:15.016 [2024-11-20 10:00:51.643043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.643107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.643353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.643418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.643696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.643759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.644059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.644123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.644376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.644441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.644737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.644801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.645053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.645117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.645396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.645460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.645729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.645792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.645992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.646065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 
00:27:15.016 [2024-11-20 10:00:51.646331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.646396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.646646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.646710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.647005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.647067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.647327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:00:51.647394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.016 qpair failed and we were unable to recover it. 00:27:15.016 [2024-11-20 10:00:51.647608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.647671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.647921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.647984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.648206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.648271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.648504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.648569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.648825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.648888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.649086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.649149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 
00:27:15.017 [2024-11-20 10:00:51.649341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.649407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.649690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.649753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.650043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.650105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.650371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.650436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.650721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.650784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.651037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.651100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.651344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.651410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.651696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.651759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.652046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.652109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.652393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.652458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 
00:27:15.017 [2024-11-20 10:00:51.652742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.652804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.653051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.653117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.653330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.653398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.653683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.653746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.654005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.654068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.654358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.654424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.654714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.654787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.655068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.655132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.655411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.655477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 00:27:15.017 [2024-11-20 10:00:51.655725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:00:51.655788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.017 qpair failed and we were unable to recover it. 
00:27:15.017 [2024-11-20 10:00:51.656065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.017 [2024-11-20 10:00:51.656128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420
00:27:15.017 qpair failed and we were unable to recover it.
00:27:15.017 [... the same three-record error pattern (connect() failed, errno = 111 / sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats for every reconnect attempt between 10:00:51.656 and 10:00:51.724 ...]
00:27:15.023 [2024-11-20 10:00:51.724765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.023 [2024-11-20 10:00:51.724829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420
00:27:15.023 qpair failed and we were unable to recover it.
00:27:15.023 [2024-11-20 10:00:51.725086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.023 [2024-11-20 10:00:51.725150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.023 qpair failed and we were unable to recover it. 00:27:15.023 [2024-11-20 10:00:51.725436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.023 [2024-11-20 10:00:51.725502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.023 qpair failed and we were unable to recover it. 00:27:15.023 [2024-11-20 10:00:51.725702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.023 [2024-11-20 10:00:51.725766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.023 qpair failed and we were unable to recover it. 00:27:15.023 [2024-11-20 10:00:51.726014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.023 [2024-11-20 10:00:51.726087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.023 qpair failed and we were unable to recover it. 00:27:15.023 [2024-11-20 10:00:51.726286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.023 [2024-11-20 10:00:51.726364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.023 qpair failed and we were unable to recover it. 00:27:15.023 [2024-11-20 10:00:51.726599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.023 [2024-11-20 10:00:51.726663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.023 qpair failed and we were unable to recover it. 00:27:15.023 [2024-11-20 10:00:51.726916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.023 [2024-11-20 10:00:51.726979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.023 qpair failed and we were unable to recover it. 00:27:15.023 [2024-11-20 10:00:51.727261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.023 [2024-11-20 10:00:51.727338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.023 qpair failed and we were unable to recover it. 00:27:15.023 [2024-11-20 10:00:51.727630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.023 [2024-11-20 10:00:51.727693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.023 qpair failed and we were unable to recover it. 00:27:15.023 [2024-11-20 10:00:51.727940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.023 [2024-11-20 10:00:51.728006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.023 qpair failed and we were unable to recover it. 
00:27:15.023 [2024-11-20 10:00:51.728319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.023 [2024-11-20 10:00:51.728384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.023 qpair failed and we were unable to recover it. 00:27:15.023 [2024-11-20 10:00:51.728628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.023 [2024-11-20 10:00:51.728693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.023 qpair failed and we were unable to recover it. 00:27:15.023 [2024-11-20 10:00:51.728972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.023 [2024-11-20 10:00:51.729035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.023 qpair failed and we were unable to recover it. 00:27:15.023 [2024-11-20 10:00:51.729280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.023 [2024-11-20 10:00:51.729374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.023 qpair failed and we were unable to recover it. 00:27:15.023 [2024-11-20 10:00:51.729657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.729720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.730011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.730074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.730342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.730407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.730664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.730727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.731019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.731081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.731321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.731385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 
00:27:15.024 [2024-11-20 10:00:51.731662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.731726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.732013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.732076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.732322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.732386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.732613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.732676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.732931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.732994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.733275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.733351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.733643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.733708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.733954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.734017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.734259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.734340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.734602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.734666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 
00:27:15.024 [2024-11-20 10:00:51.734896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.734959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.735185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.735252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.735563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.735627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.735863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.735926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.736179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.736243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.736495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.736559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.736810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.736873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.737121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.737187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.737438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.737505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.737747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.737810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 
00:27:15.024 [2024-11-20 10:00:51.738053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.738119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.738400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.738439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.738594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.738632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.738744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.738812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.739089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.739206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.739425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.739469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.739686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.739754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.739979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.740045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.740270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.740317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.740457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.740497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 
00:27:15.024 [2024-11-20 10:00:51.740631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.740671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.740832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.024 [2024-11-20 10:00:51.740909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.024 qpair failed and we were unable to recover it. 00:27:15.024 [2024-11-20 10:00:51.741194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.741259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.741511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.741551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.741768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.741807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.741919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.741986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.742239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.742336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.742477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.742524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.742717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.742786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.743078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.743144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 
00:27:15.025 [2024-11-20 10:00:51.743413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.743453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.743678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.743742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.743985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.744050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.744269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.744316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.744479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.744518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.744700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.744767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.745028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.745096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.745389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.745429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.745629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.745694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.745958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.746024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 
00:27:15.025 [2024-11-20 10:00:51.746286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.746369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.746570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.746639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.746877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.746916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.747107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.747175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.747430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.747471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.747628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.747667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.747807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.747875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.748118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.748183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.748420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.748459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.748643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.748707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 
00:27:15.025 [2024-11-20 10:00:51.749004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.749069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.749258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.749297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.749435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.749474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.749691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.749756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.750030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.750105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.750384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.750425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.750557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.750633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.750842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.750881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.751028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.751105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 00:27:15.025 [2024-11-20 10:00:51.751331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.025 [2024-11-20 10:00:51.751399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.025 qpair failed and we were unable to recover it. 
00:27:15.026 [2024-11-20 10:00:51.751567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.751606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.751724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.751763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.752001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.752067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.752281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.752329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.752489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.752528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.752755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.752822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.753063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.753128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.753390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.753430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.753658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.753723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.753979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.754043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 
00:27:15.026 [2024-11-20 10:00:51.754297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.754381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.754523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.754563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.754697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.754738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.754966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.755033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.755288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.755368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.755522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.755561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.755715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.755782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.756066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.756134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.756392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.756432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.756565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.756604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 
00:27:15.026 [2024-11-20 10:00:51.756730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.756769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.757038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.757103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.757384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.757424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.757555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.757629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.757861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.757931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.758231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.758297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.758524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.758564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.758791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.758857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.759089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.759155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.759451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.759520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 
00:27:15.026 [2024-11-20 10:00:51.759782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.759857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.760113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.760179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.760438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.760504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.760801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.760867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.761131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.761206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.761457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.761524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.761737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.761805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.026 [2024-11-20 10:00:51.762103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.026 [2024-11-20 10:00:51.762169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.026 qpair failed and we were unable to recover it. 00:27:15.027 [2024-11-20 10:00:51.762428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.027 [2024-11-20 10:00:51.762498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.027 qpair failed and we were unable to recover it. 00:27:15.027 [2024-11-20 10:00:51.762798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.027 [2024-11-20 10:00:51.762863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.027 qpair failed and we were unable to recover it. 
00:27:15.027 [2024-11-20 10:00:51.763117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.027 [2024-11-20 10:00:51.763182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.027 qpair failed and we were unable to recover it. 00:27:15.027 [2024-11-20 10:00:51.763431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.027 [2024-11-20 10:00:51.763498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.027 qpair failed and we were unable to recover it. 00:27:15.027 [2024-11-20 10:00:51.763799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.027 [2024-11-20 10:00:51.763863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.027 qpair failed and we were unable to recover it. 00:27:15.027 [2024-11-20 10:00:51.764147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.027 [2024-11-20 10:00:51.764212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.027 qpair failed and we were unable to recover it. 00:27:15.027 [2024-11-20 10:00:51.764617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.027 [2024-11-20 10:00:51.764685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.027 qpair failed and we were unable to recover it. 00:27:15.027 [2024-11-20 10:00:51.764905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.027 [2024-11-20 10:00:51.764974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.027 qpair failed and we were unable to recover it. 00:27:15.027 [2024-11-20 10:00:51.765201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.027 [2024-11-20 10:00:51.765266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.027 qpair failed and we were unable to recover it. 00:27:15.027 [2024-11-20 10:00:51.765558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.027 [2024-11-20 10:00:51.765626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.027 qpair failed and we were unable to recover it. 00:27:15.027 [2024-11-20 10:00:51.765915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.027 [2024-11-20 10:00:51.765980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.027 qpair failed and we were unable to recover it. 00:27:15.027 [2024-11-20 10:00:51.766254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.027 [2024-11-20 10:00:51.766333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.027 qpair failed and we were unable to recover it. 
00:27:15.032 [2024-11-20 10:00:51.833472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.032 [2024-11-20 10:00:51.833539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.032 qpair failed and we were unable to recover it. 00:27:15.032 [2024-11-20 10:00:51.833794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.032 [2024-11-20 10:00:51.833860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.032 qpair failed and we were unable to recover it. 00:27:15.032 [2024-11-20 10:00:51.834108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.834176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.834408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.834474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.834724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.834792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.835039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.835106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.835352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.835419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.835723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.835788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.836034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.836098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.836326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.836393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 
00:27:15.033 [2024-11-20 10:00:51.836686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.836750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.837016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.837081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.837341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.837407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.837702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.837767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.838071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.838135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.838384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.838451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.838713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.838777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.839032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.839098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.839347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.839415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.839704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.839769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 
00:27:15.033 [2024-11-20 10:00:51.840015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.840085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.840335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.840401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.840690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.840754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.841020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.841085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.841388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.841467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.841697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.841763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.842054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.842119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.842418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.842484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.842779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.842844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.843135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.843200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 
00:27:15.033 [2024-11-20 10:00:51.843458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.843524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.843775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.843844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.844151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.844217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.844449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.844514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.844760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.844828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.845053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.845122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.845383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.845448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.845739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.845804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.846113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.846178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 00:27:15.033 [2024-11-20 10:00:51.846424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.033 [2024-11-20 10:00:51.846490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.033 qpair failed and we were unable to recover it. 
00:27:15.033 [2024-11-20 10:00:51.846747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.846813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.847117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.847182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.847446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.847513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.847768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.847834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.848054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.848122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.848390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.848457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.848761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.848826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.849111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.849177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.849443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.849510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.849760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.849826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 
00:27:15.034 [2024-11-20 10:00:51.850111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.850177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.850415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.850482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.850733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.850799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.851002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.851066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.851354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.851422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.851634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.851700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.851964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.852029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.852236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.852319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.852582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.852648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.852947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.853011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 
00:27:15.034 [2024-11-20 10:00:51.853275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.853353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.853607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.853672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.853917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.853983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.854249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.854327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.854537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.854613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.854856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.854922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.855175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.855240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.855515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.855581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.855782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.855849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.856140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.856206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 
00:27:15.034 [2024-11-20 10:00:51.856466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.856533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.856772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.856839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.857140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.857205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.857483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.857551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.857808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.857874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.858164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.858231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.858539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.858606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.858847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.034 [2024-11-20 10:00:51.858912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.034 qpair failed and we were unable to recover it. 00:27:15.034 [2024-11-20 10:00:51.859199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.859265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.859583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.859649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 
00:27:15.035 [2024-11-20 10:00:51.859894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.859959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.860175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.860242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.860447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.860480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.860587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.860620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.860764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.860796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.861015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.861083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.861345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.861398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.861528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.861560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.861822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.861887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.862177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.862242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 
00:27:15.035 [2024-11-20 10:00:51.862509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.862542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.862782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.862850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.863117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.863183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.863377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.863410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.863520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.863553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.863713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.863746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.863911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.863976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.864230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.864298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.864502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.864535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.864652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.864719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 
00:27:15.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3850527 Killed "${NVMF_APP[@]}" "$@" 00:27:15.035 [2024-11-20 10:00:51.864935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.865002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.865300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.865382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.865502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.865537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 10:00:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.865754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.865837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 10:00:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:15.035 [2024-11-20 10:00:51.866071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.866136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 10:00:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:15.035 [2024-11-20 10:00:51.866391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.866425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 10:00:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:15.035 [2024-11-20 10:00:51.866524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.866556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 
00:27:15.035 10:00:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:15.035 [2024-11-20 10:00:51.866655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.866688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.866804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.866838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.866987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.867020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.867201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.867266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.867452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.867487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.867671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.035 [2024-11-20 10:00:51.867707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.035 qpair failed and we were unable to recover it. 00:27:15.035 [2024-11-20 10:00:51.867838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.867871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.868011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.868044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.868246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.868282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 
00:27:15.036 [2024-11-20 10:00:51.868423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.868457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.868564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.868599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.868794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.868860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.869060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.869126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.869376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.869410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.869520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.869553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.869696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.869729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.869909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.869944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.870177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.870243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.870423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.870458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 
00:27:15.036 [2024-11-20 10:00:51.870575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.870607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.870754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.870787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.871052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.871136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.871272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.871312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.871422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.871455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 10:00:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3851056 00:27:15.036 [2024-11-20 10:00:51.871595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 10:00:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:15.036 [2024-11-20 10:00:51.871630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 10:00:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3851056 00:27:15.036 [2024-11-20 10:00:51.871764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.871798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.871894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 10:00:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3851056 ']' 00:27:15.036 [2024-11-20 10:00:51.871970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 
00:27:15.036 10:00:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.036 10:00:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:15.036 [2024-11-20 10:00:51.872264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.872368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 10:00:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.036 [2024-11-20 10:00:51.872503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.872536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 10:00:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:15.036 [2024-11-20 10:00:51.872671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 10:00:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:15.036 [2024-11-20 10:00:51.872704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.872861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.872924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.873222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.873298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.873456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.873486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 00:27:15.036 [2024-11-20 10:00:51.873612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.036 [2024-11-20 10:00:51.873645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.036 qpair failed and we were unable to recover it. 
00:27:15.036 [2024-11-20 10:00:51.873765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.873799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.874015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.874080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.874342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.874373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.874478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.874508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.874640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.874671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.874808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.874839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.874966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.874997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.875110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.875141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.875339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.875369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.875473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.875529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 
00:27:15.037 [2024-11-20 10:00:51.875693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.875726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.875848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.875881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.876023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.876055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.876283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.876337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.876494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.876525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.876656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.876686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.876793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.876824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.876930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.876962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.877077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.877107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.877211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.877241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 
00:27:15.037 [2024-11-20 10:00:51.877347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.877377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.877506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.877537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.877672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.877702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.877864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.877894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.877988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.878020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.878152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.878182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.878318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.878348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.878445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.878477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.878635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.878666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.878794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.878824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 
00:27:15.037 [2024-11-20 10:00:51.878975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.879005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.879134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.879164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.879294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.879331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.879423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.879453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.879566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.879597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.879730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.879760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.879926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.879973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.037 qpair failed and we were unable to recover it. 00:27:15.037 [2024-11-20 10:00:51.880093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.037 [2024-11-20 10:00:51.880126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.880232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.880264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.880406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.880437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 
00:27:15.038 [2024-11-20 10:00:51.880549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.880578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.880686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.880718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.880852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.880883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.880986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.881016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.881129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.881160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.881311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.881356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.881480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.881512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.881634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.881665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.881787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.881839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.881947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.881986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 
00:27:15.038 [2024-11-20 10:00:51.882099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.882131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.882282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.882321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.882423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.882454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.882572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.882605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.882706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.882738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.882874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.882906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.883051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.883087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.883226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.883259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.883384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.883417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.883530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.883561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 
00:27:15.038 [2024-11-20 10:00:51.883699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.883732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.883836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.883866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.883999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.884048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.884234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.884268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.884389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.884421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.884558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.884589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.885670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.885709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.885866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.885902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.886723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.886763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.886924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.886959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 
00:27:15.038 [2024-11-20 10:00:51.888026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.888065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.888236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.888271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.888410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.888445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.888562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.888594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.888778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.038 [2024-11-20 10:00:51.888811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.038 qpair failed and we were unable to recover it. 00:27:15.038 [2024-11-20 10:00:51.888967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.888999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.889136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.889173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.889292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.889334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.889439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.889471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.889588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.889634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 
00:27:15.039 [2024-11-20 10:00:51.889733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.889762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.889870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.889899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.890030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.890060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.890195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.890223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.890332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.890362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.890465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.890511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.890631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.890676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.890819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.890853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.890962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.890992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.891124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.891154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 
00:27:15.039 [2024-11-20 10:00:51.891254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.891285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.891396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.891426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.891531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.891562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.891690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.891721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.891877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.891907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.892038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.892068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.892174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.892206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.892317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.892348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.892457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.892488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.892633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.892664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 
00:27:15.039 [2024-11-20 10:00:51.892820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.892850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.893015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.893045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.893142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.893172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.893290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.893328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.893424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.893455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.893566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.893597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.893729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.893759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.893915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.893946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.894044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.894073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.894202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.894232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 
00:27:15.039 [2024-11-20 10:00:51.894333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.894364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.894465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.894496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.894593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.039 [2024-11-20 10:00:51.894624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.039 qpair failed and we were unable to recover it. 00:27:15.039 [2024-11-20 10:00:51.894729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.894760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.894883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.894913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.895013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.895044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.895190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.895226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.895376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.895408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.895515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.895546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.895710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.895741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 
00:27:15.040 [2024-11-20 10:00:51.895850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.895881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.896039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.896070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.896208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.896238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.896354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.896385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.896524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.896554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.896689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.896719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.896852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.896883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.897025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.897056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.897184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.897214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.897318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.897349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 
00:27:15.040 [2024-11-20 10:00:51.897491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.897521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.897656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.897686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.897820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.897851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.897946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.897976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.898091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.898135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.898237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.898268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.898408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.898439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.898532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.898563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.898685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.898714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.898815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.898847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 
00:27:15.040 [2024-11-20 10:00:51.898952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.898985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.899089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.899119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.899256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.899286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.899402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.899433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.899536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.899565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.899659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.899689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.899788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.899836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.899949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.899980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.900141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.900173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.900327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.900367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 
00:27:15.040 [2024-11-20 10:00:51.900489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.040 [2024-11-20 10:00:51.900522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.040 qpair failed and we were unable to recover it. 00:27:15.040 [2024-11-20 10:00:51.900629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.041 [2024-11-20 10:00:51.900661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.041 qpair failed and we were unable to recover it. 00:27:15.041 [2024-11-20 10:00:51.900839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.041 [2024-11-20 10:00:51.900899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.041 qpair failed and we were unable to recover it. 00:27:15.041 [2024-11-20 10:00:51.901070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.041 [2024-11-20 10:00:51.901154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.041 qpair failed and we were unable to recover it. 00:27:15.041 [2024-11-20 10:00:51.901321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.041 [2024-11-20 10:00:51.901354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.041 qpair failed and we were unable to recover it. 00:27:15.041 [2024-11-20 10:00:51.901493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.041 [2024-11-20 10:00:51.901526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.041 qpair failed and we were unable to recover it. 00:27:15.041 [2024-11-20 10:00:51.901653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.041 [2024-11-20 10:00:51.901688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.041 qpair failed and we were unable to recover it. 00:27:15.041 [2024-11-20 10:00:51.901813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.041 [2024-11-20 10:00:51.901864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.041 qpair failed and we were unable to recover it. 00:27:15.041 [2024-11-20 10:00:51.902095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.041 [2024-11-20 10:00:51.902147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.041 qpair failed and we were unable to recover it. 00:27:15.041 [2024-11-20 10:00:51.902317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.041 [2024-11-20 10:00:51.902354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.041 qpair failed and we were unable to recover it. 
00:27:15.041 [2024-11-20 10:00:51.902484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.041 [2024-11-20 10:00:51.902517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.041 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-20 10:00:51.902657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-20 10:00:51.902712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-20 10:00:51.902915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-20 10:00:51.902949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-20 10:00:51.903134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-20 10:00:51.903212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-20 10:00:51.907317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-20 10:00:51.907354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-20 10:00:51.907461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-20 10:00:51.907491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-20 10:00:51.907626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-20 10:00:51.907656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-20 10:00:51.907789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-20 10:00:51.907817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-20 10:00:51.907911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-20 10:00:51.907938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-20 10:00:51.908036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-20 10:00:51.908063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 
00:27:15.369 [2024-11-20 10:00:51.908165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-20 10:00:51.908193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-20 10:00:51.908285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-20 10:00:51.908326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-20 10:00:51.908451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-20 10:00:51.908514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-20 10:00:51.908660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-11-20 10:00:51.908717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-11-20 10:00:51.908877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.908936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.909055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.909093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.909246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.909284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.909422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.909461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.909584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.909628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.909779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.909818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 
00:27:15.370 [2024-11-20 10:00:51.909964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.909993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.910126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.910154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.910308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.910337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.910462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.910493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.910614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.910646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.910762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.910789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.910906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.910934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.911031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.911069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.911210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.911237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.911332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.911360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 
00:27:15.370 [2024-11-20 10:00:51.911481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.911508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.911611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.911647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.911727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.911753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.911852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.911889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.912049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.912076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.912196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.912228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.912321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.912350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.912495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.912522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.912656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.912684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.912813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.912840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 
00:27:15.370 [2024-11-20 10:00:51.912960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.912987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.913073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.913101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.913233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.913260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.913395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.913422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.913517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.913554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.913675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.913713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.913833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.913870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-11-20 10:00:51.914062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-11-20 10:00:51.914088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.914215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.914241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.914362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.914389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 
00:27:15.371 [2024-11-20 10:00:51.914585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.914618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.914827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.914854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.914984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.915011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.915132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.915159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.915285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.915317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.915407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.915439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.915549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.915575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.915725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.915752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.915848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.915874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.915994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.916025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 
00:27:15.371 [2024-11-20 10:00:51.916114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.916141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.916267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.916294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.916422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.916453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.916541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.916568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.916654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.916690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.916778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.916804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.916896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.916932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.917015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.917041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.917172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.917200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.917339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.917367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 
00:27:15.371 [2024-11-20 10:00:51.917461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.917497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.917619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.917654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.917774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.917805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.917949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.917986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.918073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.918101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.918223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.918250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.918405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.918432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.918551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.918578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.918706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.918733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.918829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.918855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 
00:27:15.371 [2024-11-20 10:00:51.918952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.918984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.919156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-11-20 10:00:51.919195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-11-20 10:00:51.919321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.919350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.919458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.919486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.919635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.919662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.919774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.919800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.919922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.919949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.920051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.920080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.920202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.920229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.920352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.920379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 
00:27:15.372 [2024-11-20 10:00:51.920492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.920524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.920655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.920683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.920802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.920829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.920921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.920948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.921093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.921120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.921224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.921251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.921378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.921405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.921486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.921512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.921648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.921675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.921769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.921795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 
00:27:15.372 [2024-11-20 10:00:51.921912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.921938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.922039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.922074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.922190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.922217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.922336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.922363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.922471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.922498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.922590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.922626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.922713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.922740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.922886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.922913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.923038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.923064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.923205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.923252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 
00:27:15.372 [2024-11-20 10:00:51.923395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.923437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.923623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.923671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.923710] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:27:15.372 [2024-11-20 10:00:51.923790] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:15.372 [2024-11-20 10:00:51.923842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.923881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.924049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.924086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.924229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.924258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-11-20 10:00:51.924363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-11-20 10:00:51.924393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.924492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.924519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.924633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.924659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.924773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.924799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 
00:27:15.373 [2024-11-20 10:00:51.924887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.924913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.924999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.925025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.925134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.925160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.925244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.925271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.925358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.925386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.925528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.925555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.925639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.925666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.925780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.925808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.925896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.925922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.926016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.926042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 
00:27:15.373 [2024-11-20 10:00:51.926178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.926223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.926347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.926386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.926489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.926517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.926760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.926787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.926909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.926934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.927054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.927080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.927175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.927202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.927323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.927351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.927467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.927493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.927585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.927614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 
00:27:15.373 [2024-11-20 10:00:51.927706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.927733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.927817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.927843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.927965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.927993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.928104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.928142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.928225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.928251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.928371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.928398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.928485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.928511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.928628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.928654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.928750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.928778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.928921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.928947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 
00:27:15.373 [2024-11-20 10:00:51.929034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.929061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.929175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.929202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.929291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.929325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.929410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-11-20 10:00:51.929436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-11-20 10:00:51.929525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.929552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.929665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.929691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.929775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.929802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.929896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.929923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.930034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.930061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.930177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.930203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 
00:27:15.374 [2024-11-20 10:00:51.930322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.930350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.930458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.930484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.930570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.930596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.930718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.930748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.930862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.930889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.930998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.931024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.931112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.931140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.931230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.931256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.931376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.931404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.931517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.931544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 
00:27:15.374 [2024-11-20 10:00:51.931647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.931673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.931761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.931786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.931898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.931924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.932015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.932042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.932154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.932181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.932269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.932296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.932392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.932418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.932510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.932536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.932639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.932667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.932779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.932805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 
00:27:15.374 [2024-11-20 10:00:51.932919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.932946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.933054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.933080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.933206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.933233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.933349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.933385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-11-20 10:00:51.933504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-11-20 10:00:51.933531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.933723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.933749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.933947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.933974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.934087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.934115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.934204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.934229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.934361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.934389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 
00:27:15.375 [2024-11-20 10:00:51.934478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.934505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.934623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.934649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.934742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.934769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.934879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.934905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.935046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.935072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.935160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.935187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.935279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.935313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.935437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.935463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.935559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.935585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.935685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.935711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 
00:27:15.375 [2024-11-20 10:00:51.935793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.935819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.935960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.935986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.936066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.936092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.936198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.936224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.936338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.936365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.936451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.936480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.936602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.936629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.936746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.936773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.936856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.936882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.936996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.937022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 
00:27:15.375 [2024-11-20 10:00:51.937115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.937147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.937230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.937257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.937365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.937393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.937479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.937505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.937595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.937621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.937704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.937732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.937842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.937869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.937954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.937981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.938067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.938093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.938212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.938239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 
00:27:15.375 [2024-11-20 10:00:51.938370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.938399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-11-20 10:00:51.938483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-11-20 10:00:51.938510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.938628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.938655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.938767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.938794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.938930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.938957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.939045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.939071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.939211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.939238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.939364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.939391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.939506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.939533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.939631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.939657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 
00:27:15.376 [2024-11-20 10:00:51.939802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.939828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.939941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.939967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.940058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.940087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.940165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.940191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.940283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.940327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.940433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.940460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.940575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.940611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.940755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.940782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.940897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.940923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.941045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.941071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 
00:27:15.376 [2024-11-20 10:00:51.941157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.941183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.941296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.941330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.941446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.941472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.941594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.941620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.941700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.941726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.941801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.941827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.941941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.941967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.942072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.942099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.942207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.942233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.942384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.942411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 
00:27:15.376 [2024-11-20 10:00:51.942529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.942559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.942674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.942701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.942793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.942819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.942935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.942962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-11-20 10:00:51.943051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-11-20 10:00:51.943077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.943192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.943219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.943317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.943346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.943442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.943469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.943581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.943616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.943756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.943783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 
00:27:15.377 [2024-11-20 10:00:51.943924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.943950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.944034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.944060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.944180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.944208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.944309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.944338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.944431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.944457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.944611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.944637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.944746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.944772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.944884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.944910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.945022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.945049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.945179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.945207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 
00:27:15.377 [2024-11-20 10:00:51.945296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.945329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.945419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.945448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.945545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.945572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.945660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.945687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.945773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.945799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.945916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.945944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.946060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.946087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.946235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.946263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.946390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.946416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.946501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.946527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 
00:27:15.377 [2024-11-20 10:00:51.946635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.946662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.946779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.946805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.946945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.946972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.947058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.947085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.947203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.947229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.947333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.947360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.947458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.947485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.947606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.947632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.947757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.947784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 00:27:15.377 [2024-11-20 10:00:51.947879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.377 [2024-11-20 10:00:51.947907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.377 qpair failed and we were unable to recover it. 
00:27:15.378 [2024-11-20 10:00:51.947996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.948027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.948144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.948171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.948286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.948325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.948463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.948490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.948580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.948607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.948693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.948719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.948796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.948823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.948907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.948933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.949044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.949071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.949152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.949178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 
00:27:15.378 [2024-11-20 10:00:51.949292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.949325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.949442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.949471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.949614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.949641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.949752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.949779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.949900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.949928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.950016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.950042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.950166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.950193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.950280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.950312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.950430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.950456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.950535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.950561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 
00:27:15.378 [2024-11-20 10:00:51.950676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.950702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.950816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.950841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.950929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.950957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.951071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.951098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.951181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.951207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.951326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.951353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.951471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.951497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.951588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.951615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.951720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.951747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.951836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.951864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 
00:27:15.378 [2024-11-20 10:00:51.951984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.952011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.952101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.952127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.952270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.952296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.952419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.952446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.952585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.952611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.952703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.378 [2024-11-20 10:00:51.952731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.378 qpair failed and we were unable to recover it. 00:27:15.378 [2024-11-20 10:00:51.952848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.379 [2024-11-20 10:00:51.952874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.379 qpair failed and we were unable to recover it. 00:27:15.379 [2024-11-20 10:00:51.953014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.379 [2024-11-20 10:00:51.953040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.379 qpair failed and we were unable to recover it. 00:27:15.379 [2024-11-20 10:00:51.953155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.379 [2024-11-20 10:00:51.953181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.379 qpair failed and we were unable to recover it. 00:27:15.379 [2024-11-20 10:00:51.953262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.379 [2024-11-20 10:00:51.953288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.379 qpair failed and we were unable to recover it. 
00:27:15.379 [2024-11-20 10:00:51.953408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.379 [2024-11-20 10:00:51.953439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420
00:27:15.379 qpair failed and we were unable to recover it.
00:27:15.379 [... the same three-line error record repeats continuously from 10:00:51.953531 through 10:00:51.981310 (elapsed 00:27:15.379 - 00:27:15.385), cycling over tqpair=0x7f4a30000b90, tqpair=0x7f4a2c000b90, and tqpair=0x7f4a38000b90, always with addr=10.0.0.2, port=4420, errno = 111, and "qpair failed and we were unable to recover it." ...]
00:27:15.385 [2024-11-20 10:00:51.981439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.981466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 00:27:15.385 [2024-11-20 10:00:51.981578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.981605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 00:27:15.385 [2024-11-20 10:00:51.981727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.981753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 00:27:15.385 [2024-11-20 10:00:51.981840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.981867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 00:27:15.385 [2024-11-20 10:00:51.981957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.981985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 00:27:15.385 [2024-11-20 10:00:51.982076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.982102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 00:27:15.385 [2024-11-20 10:00:51.982218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.982245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 00:27:15.385 [2024-11-20 10:00:51.982374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.982402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 00:27:15.385 [2024-11-20 10:00:51.982484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.982510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 00:27:15.385 [2024-11-20 10:00:51.982635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.982675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 
00:27:15.385 [2024-11-20 10:00:51.982765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.982793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 00:27:15.385 [2024-11-20 10:00:51.982879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.982906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 00:27:15.385 [2024-11-20 10:00:51.983010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.983038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 00:27:15.385 [2024-11-20 10:00:51.983139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.983166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 00:27:15.385 [2024-11-20 10:00:51.983250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.983278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 00:27:15.385 [2024-11-20 10:00:51.983396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.983424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 00:27:15.385 [2024-11-20 10:00:51.983514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.983543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 00:27:15.385 [2024-11-20 10:00:51.983647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.983674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 00:27:15.385 [2024-11-20 10:00:51.983789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.983816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 00:27:15.385 [2024-11-20 10:00:51.983900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.983926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 
00:27:15.385 [2024-11-20 10:00:51.984005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.385 [2024-11-20 10:00:51.984031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.385 qpair failed and we were unable to recover it. 00:27:15.385 [2024-11-20 10:00:51.984145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.984172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.984261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.984288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.984417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.984444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.984527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.984553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.987447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.987475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.987601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.987628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.987717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.987744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.987830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.987858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.987946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.987974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 
00:27:15.386 [2024-11-20 10:00:51.988083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.988117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.988206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.988232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.988346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.988375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.988465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.988491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.988608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.988636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.988759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.988785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.988866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.988892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.988971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.988997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.989100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.989127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.989206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.989231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 
00:27:15.386 [2024-11-20 10:00:51.989367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.989394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.989502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.989528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.989649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.989675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.989787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.989813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.989909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.989936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.990016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.990042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.990140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.990167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.990246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.990271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.990375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.990401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.990485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.990510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 
00:27:15.386 [2024-11-20 10:00:51.990591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.990618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.990770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.990796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.990884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.990910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.991002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.991027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.991135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.991161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.991246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.991273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.386 [2024-11-20 10:00:51.991397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.386 [2024-11-20 10:00:51.991424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.386 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.991520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.991559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.991674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.991703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.991787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.991814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 
00:27:15.387 [2024-11-20 10:00:51.991905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.991932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.992049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.992076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.992162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.992189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.992270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.992297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.992413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.992440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.992554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.992580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.992683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.992709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.992822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.992848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.992929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.992954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.993044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.993070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 
00:27:15.387 [2024-11-20 10:00:51.993169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.993214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.993331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.993359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.993479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.993507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.993658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.993686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.993781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.993807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.993899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.993927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.994014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.994042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.994129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.994155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.994245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.994272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.994375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.994403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 
00:27:15.387 [2024-11-20 10:00:51.994493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.994519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.994620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.994648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.994763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.994789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.994904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.994931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.995049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.995075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.995164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.995191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.995267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.995293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.995421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.995448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.995537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.995564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.995641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.995667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 
00:27:15.387 [2024-11-20 10:00:51.995746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.995773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.995852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.995878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.995963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.387 [2024-11-20 10:00:51.995990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.387 qpair failed and we were unable to recover it. 00:27:15.387 [2024-11-20 10:00:51.996119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.996159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.996264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.996293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.996418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.996445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.996555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.996582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.996716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.996746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.996889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.996915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.997027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.997054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 
00:27:15.388 [2024-11-20 10:00:51.997144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.997171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.997284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.997316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.997458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.997484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.997592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.997618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.997725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.997750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.997833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.997859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.997947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.997976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.998057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.998083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.998167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.998195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.998285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.998318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 
00:27:15.388 [2024-11-20 10:00:51.998407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.998438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.998519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.998545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.998632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.998659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.998740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.998766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.998854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.998880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.998961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.998989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.999071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.999098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.999231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.999257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.999374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.999401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.999486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.999513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 
00:27:15.388 [2024-11-20 10:00:51.999633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.999659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.999745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.999771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.999860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:51.999887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:51.999972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:52.000000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:52.000090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:52.000117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:52.000205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:52.000232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:52.000370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.388 [2024-11-20 10:00:52.000396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.388 qpair failed and we were unable to recover it. 00:27:15.388 [2024-11-20 10:00:52.000413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:15.388 [2024-11-20 10:00:52.000475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.000500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.000584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.000610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 
00:27:15.389 [2024-11-20 10:00:52.000704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.000732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.000821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.000848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.000941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.000967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.001047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.001073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.001165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.001191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.001339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.001366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.001485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.001512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.001616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.001645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.001758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.001785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.001901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.001928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 
00:27:15.389 [2024-11-20 10:00:52.002015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.002041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.002125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.002153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.002273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.002300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.002427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.002453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.002540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.002567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.002683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.002709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.002788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.002815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.002942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.002967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.003076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.003103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.003221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.003247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 
00:27:15.389 [2024-11-20 10:00:52.003359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.003385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.003466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.003497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.003582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.003608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.003695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.003722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.003841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.003867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.003960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.003987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.004081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.004107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.004198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.004224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.004343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.389 [2024-11-20 10:00:52.004371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.389 qpair failed and we were unable to recover it. 00:27:15.389 [2024-11-20 10:00:52.004458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.004484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 
00:27:15.390 [2024-11-20 10:00:52.004569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.004595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.004695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.004721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.004838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.004864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.004974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.005014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.005119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.005148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.005243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.005270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.005403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.005432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.005533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.005559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.005679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.005705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.005821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.005849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 
00:27:15.390 [2024-11-20 10:00:52.005938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.005964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.006056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.006082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.006171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.006198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.006322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.006349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.006471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.006497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.006630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.006657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.006768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.006794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.006881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.006909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.007009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.007048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.007150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.007190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 
00:27:15.390 [2024-11-20 10:00:52.007321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.007350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.007449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.007476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.007569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.007606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.007717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.007743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.007835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.007861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.007976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.008001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.008116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.008142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.008226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.008252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.008416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.008442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.008528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.008553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 
00:27:15.390 [2024-11-20 10:00:52.008652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.008680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.008817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.008848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.008932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.008957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.009043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.009069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-11-20 10:00:52.009165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-11-20 10:00:52.009196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.009336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.009364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.009487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.009514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.009641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.009675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.009764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.009792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.009887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.009914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 
00:27:15.391 [2024-11-20 10:00:52.010002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.010029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.010127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.010153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.010249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.010288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.010398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.010426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.010566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.010601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.010734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.010760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.010896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.010925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.011015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.011042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.011149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.011189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.011288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.011321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 
00:27:15.391 [2024-11-20 10:00:52.011442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.011470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.011567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.011599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.011681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.011708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.011826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.011852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.011940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.011968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.012071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.012101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.012221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.012249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.012376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.012403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.012510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.012536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.012636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.012664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 
00:27:15.391 [2024-11-20 10:00:52.012751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.012778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.012871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.012898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.013013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.013040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.013156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.013183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.013275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.013315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.013414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.013440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.013523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.013550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.013649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.013675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.013788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-11-20 10:00:52.013814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-11-20 10:00:52.013932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.013960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 
00:27:15.392 [2024-11-20 10:00:52.014062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.014088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.014174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.014204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.014318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.014344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.014425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.014452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.014542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.014568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.014685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.014711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.014830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.014856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.014996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.015022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.015135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.015162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.015312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.015352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 
00:27:15.392 [2024-11-20 10:00:52.015444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.015472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.015564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.015590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.015711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.015737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.015856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.015883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.015970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.015997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.016094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.016120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.016231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.016257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.016365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.016394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.016479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.016507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.016594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.016619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 
00:27:15.392 [2024-11-20 10:00:52.016734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.016760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.016875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.016902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.016993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.017019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.017103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.017129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.017230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.017270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.017403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.017452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.017586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.017613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.017765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.017792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.017893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.017920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.018007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.018034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 
00:27:15.392 [2024-11-20 10:00:52.018155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.018182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.018271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.018297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.018403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.018429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-11-20 10:00:52.018513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-11-20 10:00:52.018538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.018665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.018691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.018784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.018810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.018929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.018958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.019079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.019106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.019220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.019247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.019354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.019382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 
00:27:15.393 [2024-11-20 10:00:52.019471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.019497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.019592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.019625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.019769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.019795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.019910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.019936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.020023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.020050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.020157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.020182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.020299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.020333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.020453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.020479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.020590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.020615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.020701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.020726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 
00:27:15.393 [2024-11-20 10:00:52.020821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.020847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.020960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.020987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.021071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.021097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.021193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.021220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.021323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.021351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.021449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.021475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.021593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.021620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.021705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.021732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.021850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.021876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.022012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.022039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 
00:27:15.393 [2024-11-20 10:00:52.022147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.022173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.022254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.022279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.022396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.022441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.022575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.022603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.022718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.022743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.022837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.022863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.022956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.022982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.023071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.023096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-11-20 10:00:52.023210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-11-20 10:00:52.023238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.023349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.023376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 
00:27:15.394 [2024-11-20 10:00:52.023464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.023491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.023581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.023607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.023726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.023752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.023864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.023890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.023998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.024024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.024108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.024135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.024221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.024247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.024372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.024399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.024487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.024513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.024631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.024658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 
00:27:15.394 [2024-11-20 10:00:52.024782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.024808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.024948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.024983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.025097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.025124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.025214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.025240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.025331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.025358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.025440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.025466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.025573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.025600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.025689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.025716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.025832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.025858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.025971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.025997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 
00:27:15.394 [2024-11-20 10:00:52.026108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.026134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.026261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.026301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.026413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.026440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.026536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.026563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.026692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.026719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.026840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.026867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.026958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.026985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.027096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.027123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.027241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.394 [2024-11-20 10:00:52.027267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.394 qpair failed and we were unable to recover it. 00:27:15.394 [2024-11-20 10:00:52.027373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.027401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 
00:27:15.395 [2024-11-20 10:00:52.027485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.027512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.027640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.027668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.027753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.027779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.027863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.027891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.027982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.028010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.028127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.028153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.028246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.028273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.028365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.028392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.028485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.028511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.028611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.028637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 
00:27:15.395 [2024-11-20 10:00:52.028725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.028751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.028865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.028892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.028978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.029006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.029110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.029149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.029249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.029278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.029408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.029436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.029517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.029544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.029633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.029659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.029740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.029766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.029860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.029886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 
00:27:15.395 [2024-11-20 10:00:52.029967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.029993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.030129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.030160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.030275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.030310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.030402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.030429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.030518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.030543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.030668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.030696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.030808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.030834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.030947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.030973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.031068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.031096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.031184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.031212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 
00:27:15.395 [2024-11-20 10:00:52.031330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.031370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.031488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.031515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.031640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.031666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.031749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.031776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.395 qpair failed and we were unable to recover it. 00:27:15.395 [2024-11-20 10:00:52.031856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.395 [2024-11-20 10:00:52.031882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.032003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.032029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.032113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.032139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.032253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.032280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.032371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.032397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.032489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.032515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 
00:27:15.396 [2024-11-20 10:00:52.032626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.032652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.032741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.032767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.032857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.032886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.032979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.033006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.033085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.033111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.033204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.033231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.033335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.033362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.033495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.033521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.033675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.033702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.033815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.033841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 
00:27:15.396 [2024-11-20 10:00:52.033926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.033952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.034065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.034090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.034180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.034206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.034289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.034326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.034410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.034439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.034536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.034576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.034667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.034695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.034804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.034831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.034912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.034939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.035020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.035046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 
00:27:15.396 [2024-11-20 10:00:52.035126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.035153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.035247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.035279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.035396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.035422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.035505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.035531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.035675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.035701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.035788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.035813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.035904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.035930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.036045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.036072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.036162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.036190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 00:27:15.396 [2024-11-20 10:00:52.036280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.396 [2024-11-20 10:00:52.036310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.396 qpair failed and we were unable to recover it. 
00:27:15.396 [2024-11-20 10:00:52.036397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.036422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.036515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.036540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.036634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.036659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.036798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.036824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.036936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.036963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.037062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.037088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.037183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.037211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.037338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.037365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.037474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.037500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.037585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.037612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 
00:27:15.397 [2024-11-20 10:00:52.037692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.037719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.037811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.037838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.037932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.037958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.038050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.038078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.038170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.038196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.038312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.038338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.038424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.038450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.038539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.038566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.038662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.038690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.038777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.038804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 
00:27:15.397 [2024-11-20 10:00:52.038945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.038972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.039065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.039091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.039176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.039202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.039318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.039346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.039423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.039450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.039548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.039575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.039687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.039714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.039829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.039855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.039946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.039973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.040115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.040141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 
00:27:15.397 [2024-11-20 10:00:52.040225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.040252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.040387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.040418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.040531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.040558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.040657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.040683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.040793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.040821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.397 qpair failed and we were unable to recover it. 00:27:15.397 [2024-11-20 10:00:52.040913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.397 [2024-11-20 10:00:52.040939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.041026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.041054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.041168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.041195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.041333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.041360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.041450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.041477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 
00:27:15.398 [2024-11-20 10:00:52.041602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.041628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.041741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.041767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.041880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.041907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.041993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.042019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.042115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.042155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.042289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.042325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.042439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.042466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.042591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.042618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.042713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.042740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.042833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.042861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 
00:27:15.398 [2024-11-20 10:00:52.042973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.042999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.043107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.043146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.043248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.043275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.043384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.043412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.043497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.043524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.043605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.043631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.043718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.043746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.043887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.043913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.043999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.044025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.044148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.044176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 
00:27:15.398 [2024-11-20 10:00:52.044316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.044344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.044437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.044464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.044552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.044578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.044715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.044741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.044833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.044860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.044948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.044975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.045069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.045097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.045242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.045270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.045365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.045392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 00:27:15.398 [2024-11-20 10:00:52.045480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.398 [2024-11-20 10:00:52.045507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.398 qpair failed and we were unable to recover it. 
00:27:15.398 [2024-11-20 10:00:52.045587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.045614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.045725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.045756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.045844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.045871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.045992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.046018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.046169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.046210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.046318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.046358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.046487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.046516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.046605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.046632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.046717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.046744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.046860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.046886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 
00:27:15.399 [2024-11-20 10:00:52.046975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.047002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.047112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.047152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.047282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.047316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.047406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.047433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.047527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.047555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.047658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.047685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.047775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.047802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.047887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.047913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.047995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.048020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.048103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.048129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 
00:27:15.399 [2024-11-20 10:00:52.048248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.048277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.048381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.048409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.048503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.048530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.048640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.048667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.048781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.048807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.399 qpair failed and we were unable to recover it. 00:27:15.399 [2024-11-20 10:00:52.048884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.399 [2024-11-20 10:00:52.048911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.049020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.049047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.049151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.049177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.049260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.049289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.049420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.049447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 
00:27:15.400 [2024-11-20 10:00:52.049535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.049561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.049652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.049678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.049768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.049794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.049905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.049931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.050058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.050084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.050182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.050221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.050375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.050403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.050522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.050550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.050645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.050671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.050756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.050783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 
00:27:15.400 [2024-11-20 10:00:52.050890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.050917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.051030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.051060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.051173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.051199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.051315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.051344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.051432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.051460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.051541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.051567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.051683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.051709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.051789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.051816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.051902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.051929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.052043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.052069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 
00:27:15.400 [2024-11-20 10:00:52.052176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.052215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.052343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.052372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.052482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.052509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.052633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.052659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.052748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.052774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.052866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.052894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.052986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.053014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.053144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.053171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.053285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.053317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 00:27:15.400 [2024-11-20 10:00:52.053405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.400 [2024-11-20 10:00:52.053431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.400 qpair failed and we were unable to recover it. 
00:27:15.400 [2024-11-20 10:00:52.053537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.053563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.053679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.053706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.053823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.053850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.053938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.053964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.054051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.054077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.054168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.054197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.054291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.054323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.054442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.054469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.054559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.054587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.054712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.054740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 
00:27:15.401 [2024-11-20 10:00:52.054829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.054858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.054945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.054971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.055061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.055087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.055172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.055198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.055312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.055338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.055416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.055441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.055528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.055555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.055668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.055694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.055767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.055793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.055901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.055929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 
00:27:15.401 [2024-11-20 10:00:52.056017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.056046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.056133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.056164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.056278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.056310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.056396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.056423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.056540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.056566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.056656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.056683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.056780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.056806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.056916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.056942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.057036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.057061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.057145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.057171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 
00:27:15.401 [2024-11-20 10:00:52.057257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.057283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.057409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.057434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.057519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.057545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.057653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.401 [2024-11-20 10:00:52.057679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.401 qpair failed and we were unable to recover it. 00:27:15.401 [2024-11-20 10:00:52.057779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.057807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.057898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.057924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.058038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.058064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.058152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.058178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.058296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.058334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.058415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.058442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 
00:27:15.402 [2024-11-20 10:00:52.058535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.058563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.058669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.058695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.058803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.058829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.058914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.058942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.059097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.059137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.059237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.059266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.059395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.059424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.059536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.059563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.059673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.059705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.059787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.059814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 
00:27:15.402 [2024-11-20 10:00:52.059897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.059925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.060022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.060048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.060156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.060182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.060269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.060295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.060389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.060414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.060503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.060530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.060623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.060650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.060739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.060765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.060850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.060878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.060963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.060989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 
00:27:15.402 [2024-11-20 10:00:52.061105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.061131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.061243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.061269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.061424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.061454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.061573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.061601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.061693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.061720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.061832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.061859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.061940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.061966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.062057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.062082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.062171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.062198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.402 qpair failed and we were unable to recover it. 00:27:15.402 [2024-11-20 10:00:52.062282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.402 [2024-11-20 10:00:52.062315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 
00:27:15.403 [2024-11-20 10:00:52.062411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.062437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.062527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.062552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.062635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.062661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.062742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.062768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.062856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.062884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.062975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.063001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.063085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.063111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.063198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.063224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.063314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.063341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.063429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.063457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 
00:27:15.403 [2024-11-20 10:00:52.063535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.063561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.063646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.063673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.063764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.063791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.403 [2024-11-20 10:00:52.063791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.063827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:15.403 [2024-11-20 10:00:52.063841] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:15.403 [2024-11-20 10:00:52.063853] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:15.403 [2024-11-20 10:00:52.063868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:15.403 [2024-11-20 10:00:52.063869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.063896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.063977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.064002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.064092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.064120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.064204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.064235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.064350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.064377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 
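The app_setup_trace notices above record how this run's trace data can be inspected: the target registered tracepoint group mask 0xFFFF and backs its trace buffer with /dev/shm/nvmf_trace.0. A hedged sketch of the two workflows suggested by those notices, using only the command and path they print (the destination filenames are examples, not from the log):

  # Snapshot the live events, as the notice suggests.
  spdk_trace -s nvmf -i 0 > nvmf_trace.txt
  # Or keep the shared-memory trace file for offline analysis/debug.
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0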
00:27:15.403 [2024-11-20 10:00:52.064483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.064509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.064605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.064633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.064771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.064798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.064914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.064940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.065030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.065057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.065150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.065177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.065260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.065288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.065390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.065417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.065504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.065459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:15.403 [2024-11-20 10:00:52.065530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 
00:27:15.403 [2024-11-20 10:00:52.065513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:15.403 [2024-11-20 10:00:52.065563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:15.403 [2024-11-20 10:00:52.065623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.065566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:15.403 [2024-11-20 10:00:52.065652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.065741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.065773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.065890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.065916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.066008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.403 [2024-11-20 10:00:52.066036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.403 qpair failed and we were unable to recover it. 00:27:15.403 [2024-11-20 10:00:52.066136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.404 [2024-11-20 10:00:52.066163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.404 qpair failed and we were unable to recover it. 00:27:15.404 [2024-11-20 10:00:52.066268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.404 [2024-11-20 10:00:52.066294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.404 qpair failed and we were unable to recover it. 00:27:15.404 [2024-11-20 10:00:52.066390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.404 [2024-11-20 10:00:52.066416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.404 qpair failed and we were unable to recover it. 00:27:15.404 [2024-11-20 10:00:52.066505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.404 [2024-11-20 10:00:52.066531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.404 qpair failed and we were unable to recover it. 00:27:15.404 [2024-11-20 10:00:52.066622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.404 [2024-11-20 10:00:52.066647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.404 qpair failed and we were unable to recover it. 
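The reactor_run notices interleaved with the connection errors show the target's reactors coming up on cores 5, 6, 7 and 4 while the initiator keeps retrying. Those four cores correspond to CPU mask 0xf0, which is typically what an SPDK app is given via its -m/--cpumask option (an assumption here; the log does not show the target's command line). A one-line sketch of the mask arithmetic:

  # Cores 4-7 from the reactor_run notices expressed as a CPU mask.
  python3 -c 'print(hex(sum(1 << core for core in (4, 5, 6, 7))))'   # -> 0xf0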
00:27:15.404 [2024-11-20 10:00:52.066737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.404 [2024-11-20 10:00:52.066763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420
00:27:15.404 qpair failed and we were unable to recover it.
00:27:15.404 [... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error / "qpair failed and we were unable to recover it." triplet repeats continuously between the timestamps above and below, cycling through tqpair=0x7f4a38000b90, 0x7f4a30000b90 and 0x7f4a2c000b90, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:27:15.410 [2024-11-20 10:00:52.092459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.410 [2024-11-20 10:00:52.092486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420
00:27:15.410 qpair failed and we were unable to recover it.
00:27:15.410 [2024-11-20 10:00:52.092576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.092609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.092697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.092723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.092807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.092833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.092932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.092958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.093087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.093114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.093228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.093254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.093343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.093369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.093518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.093544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.093627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.093653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.093748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.093774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 
00:27:15.410 [2024-11-20 10:00:52.093862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.093888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.093973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.093999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.094078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.094103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.094195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.094223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.094301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.094334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.094414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.094441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.094529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.094555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.094640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.094666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.094783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.094809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.094893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.094920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 
00:27:15.410 [2024-11-20 10:00:52.095012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.095038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.095130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.095171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.095271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.410 [2024-11-20 10:00:52.095298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.410 qpair failed and we were unable to recover it. 00:27:15.410 [2024-11-20 10:00:52.095393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.095419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.095513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.095539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.095632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.095658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.095794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.095820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.095909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.095936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.096037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.096078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.096170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.096199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 
00:27:15.411 [2024-11-20 10:00:52.096322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.096350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.096441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.096467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.096548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.096574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.096666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.096692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.096773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.096800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.096888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.096914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.097012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.097052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.097146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.097175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.097253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.097280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.097373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.097401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 
00:27:15.411 [2024-11-20 10:00:52.097516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.097542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.097628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.097655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.097739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.097767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.097864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.097893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.097981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.098007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.098125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.098154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.098239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.098266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.098352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.098379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.098468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.098494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.098577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.098608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 
00:27:15.411 [2024-11-20 10:00:52.098695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.098721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.098839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.098865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.098952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.098980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.099060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.099086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.099203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.099230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.099318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.099345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.099425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.099451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.411 [2024-11-20 10:00:52.099550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.411 [2024-11-20 10:00:52.099576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.411 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.099692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.099717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.099799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.099826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 
00:27:15.412 [2024-11-20 10:00:52.099909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.099935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.100020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.100048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.100129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.100156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.100243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.100270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.100423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.100451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.100536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.100563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.100664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.100690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.100778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.100806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.100893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.100919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.101039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.101066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 
00:27:15.412 [2024-11-20 10:00:52.101152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.101179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.101270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.101296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.101423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.101463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.101592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.101621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.101706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.101733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.101861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.101887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.101991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.102018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.102136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.102162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.102268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.102294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.102399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.102425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 
00:27:15.412 [2024-11-20 10:00:52.102511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.102537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.102625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.102652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.102734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.102760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.102873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.102900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.102980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.103006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.103084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.103111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.103190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.103217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.103331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.103360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.103461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.103490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.103577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.103609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 
00:27:15.412 [2024-11-20 10:00:52.103695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.103723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.103835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.103861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.412 [2024-11-20 10:00:52.103952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.412 [2024-11-20 10:00:52.103978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.412 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.104063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.104089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.104187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.104213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.104298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.104342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.104452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.104478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.104588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.104614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.104744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.104770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.104894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.104923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 
00:27:15.413 [2024-11-20 10:00:52.105015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.105044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.105127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.105154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.105240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.105266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.105366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.105394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.105511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.105538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.105631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.105658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.105745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.105772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.105867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.105893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.105979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.106005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.106084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.106111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 
00:27:15.413 [2024-11-20 10:00:52.106195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.106222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.106360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.106389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.106472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.106499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.106611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.106637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.106763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.106789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.106881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.106907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.106998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.107024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.107117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.107145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.107280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.107314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.107415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.107442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 
00:27:15.413 [2024-11-20 10:00:52.107563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.107590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.107672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.107698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.107782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.107808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.107900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.107926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.108021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.108060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.108154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.108182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.108278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.108319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.413 [2024-11-20 10:00:52.108440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.413 [2024-11-20 10:00:52.108466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.413 qpair failed and we were unable to recover it. 00:27:15.414 [2024-11-20 10:00:52.108560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.414 [2024-11-20 10:00:52.108586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.414 qpair failed and we were unable to recover it. 00:27:15.414 [2024-11-20 10:00:52.108670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.414 [2024-11-20 10:00:52.108700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.414 qpair failed and we were unable to recover it. 
00:27:15.414 [2024-11-20 10:00:52.108780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.414 [2024-11-20 10:00:52.108808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420
00:27:15.414 qpair failed and we were unable to recover it.
00:27:15.414-00:27:15.420 [2024-11-20 10:00:52.108890 through 10:00:52.134561] the same three-line error sequence repeats continuously for tqpair=0x7f4a2c000b90, 0x7f4a30000b90, and 0x7f4a38000b90, always with addr=10.0.0.2, port=4420 and errno = 111; every connect attempt ends with "qpair failed and we were unable to recover it."
00:27:15.420 [2024-11-20 10:00:52.134652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.420 [2024-11-20 10:00:52.134679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.420 qpair failed and we were unable to recover it. 00:27:15.420 [2024-11-20 10:00:52.134873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.420 [2024-11-20 10:00:52.134900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.420 qpair failed and we were unable to recover it. 00:27:15.420 [2024-11-20 10:00:52.134984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.420 [2024-11-20 10:00:52.135012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.420 qpair failed and we were unable to recover it. 00:27:15.420 [2024-11-20 10:00:52.135109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.420 [2024-11-20 10:00:52.135135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.420 qpair failed and we were unable to recover it. 00:27:15.420 [2024-11-20 10:00:52.135240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.420 [2024-11-20 10:00:52.135267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.420 qpair failed and we were unable to recover it. 00:27:15.420 [2024-11-20 10:00:52.135358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.420 [2024-11-20 10:00:52.135385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.420 qpair failed and we were unable to recover it. 00:27:15.420 [2024-11-20 10:00:52.135503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.420 [2024-11-20 10:00:52.135529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.420 qpair failed and we were unable to recover it. 00:27:15.420 [2024-11-20 10:00:52.135621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.420 [2024-11-20 10:00:52.135648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.420 qpair failed and we were unable to recover it. 00:27:15.420 [2024-11-20 10:00:52.135733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.420 [2024-11-20 10:00:52.135760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.420 qpair failed and we were unable to recover it. 00:27:15.420 [2024-11-20 10:00:52.135845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.420 [2024-11-20 10:00:52.135871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.420 qpair failed and we were unable to recover it. 
00:27:15.420 [2024-11-20 10:00:52.135954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.420 [2024-11-20 10:00:52.135981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.420 qpair failed and we were unable to recover it. 00:27:15.420 [2024-11-20 10:00:52.136069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.420 [2024-11-20 10:00:52.136096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.420 qpair failed and we were unable to recover it. 00:27:15.420 [2024-11-20 10:00:52.136223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.420 [2024-11-20 10:00:52.136262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.420 qpair failed and we were unable to recover it. 00:27:15.420 [2024-11-20 10:00:52.136359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.420 [2024-11-20 10:00:52.136387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.420 qpair failed and we were unable to recover it. 00:27:15.420 [2024-11-20 10:00:52.136490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.420 [2024-11-20 10:00:52.136518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.420 qpair failed and we were unable to recover it. 00:27:15.420 [2024-11-20 10:00:52.136629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.420 [2024-11-20 10:00:52.136655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.420 qpair failed and we were unable to recover it. 00:27:15.420 [2024-11-20 10:00:52.136744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.420 [2024-11-20 10:00:52.136771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.420 qpair failed and we were unable to recover it. 00:27:15.420 [2024-11-20 10:00:52.136890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.420 [2024-11-20 10:00:52.136916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.137004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.137030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.137119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.137144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 
00:27:15.421 [2024-11-20 10:00:52.137231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.137258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.137363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.137391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.137506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.137532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.137621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.137647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.137732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.137758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.137844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.137870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.137982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.138008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.138087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.138113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.138200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.138226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.138324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.138356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 
00:27:15.421 [2024-11-20 10:00:52.138450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.138477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.138557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.138583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.138662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.138688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.138777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.138805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.138894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.138920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.139002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.139028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.139111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.139137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.139227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.139253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.139375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.139405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.139498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.139524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 
00:27:15.421 [2024-11-20 10:00:52.139607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.139633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.139718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.139746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.139835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.139864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.140006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.140032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.140120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.140146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.140224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.140251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.140335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.140363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.140476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.140502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.140597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.140623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.140715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.140742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 
00:27:15.421 [2024-11-20 10:00:52.140830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.140858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.140944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.140972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.141059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.141085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.141167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.141193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.421 [2024-11-20 10:00:52.141276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.421 [2024-11-20 10:00:52.141316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.421 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.141434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.141460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.141553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.141581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.141661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.141687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.141801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.141827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.141913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.141939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 
00:27:15.422 [2024-11-20 10:00:52.142028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.142057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.142139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.142166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.142277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.142315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.142405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.142433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.142520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.142548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.142630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.142656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.142747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.142774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.142858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.142884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.142990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.143015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.143099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.143130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 
00:27:15.422 [2024-11-20 10:00:52.143222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.143248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.143341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.143371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.143463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.143489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.143572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.143598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.143690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.143716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.143807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.143835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.143918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.143946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.144028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.144055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.144139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.144164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.144250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.144276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 
00:27:15.422 [2024-11-20 10:00:52.144376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.144403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.144498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.144525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.144642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.144668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.144790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.144815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.144912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.144937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.145018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.145044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.145127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.145155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.145247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.145291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.145398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.145426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.145513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.145541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 
00:27:15.422 [2024-11-20 10:00:52.145742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.145768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.145848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.145874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.422 qpair failed and we were unable to recover it. 00:27:15.422 [2024-11-20 10:00:52.145957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.422 [2024-11-20 10:00:52.145983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.146098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.146125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.146214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.146241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.146330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.146358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.146454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.146495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.146601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.146630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.146719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.146746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.146832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.146858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 
00:27:15.423 [2024-11-20 10:00:52.146940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.146966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.147041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.147067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.147150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.147176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.147254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.147280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.147383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.147411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.147511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.147539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.147625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.147651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.147732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.147758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.147836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.147862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.147975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.148005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 
00:27:15.423 [2024-11-20 10:00:52.148091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.148117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.148196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.148223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.148312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.148339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.148424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.148450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.148538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.148564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.148646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.148672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.148782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.148809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.148892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.148919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.149005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.149032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.149113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.149140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 
00:27:15.423 [2024-11-20 10:00:52.149218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.149245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.149339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.149365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.149439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.149465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.149558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.149584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.149699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.149724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.149810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.149840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.149924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.149951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.150034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.423 [2024-11-20 10:00:52.150061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.423 qpair failed and we were unable to recover it. 00:27:15.423 [2024-11-20 10:00:52.150145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.424 [2024-11-20 10:00:52.150171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.424 qpair failed and we were unable to recover it. 00:27:15.424 [2024-11-20 10:00:52.150259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.424 [2024-11-20 10:00:52.150288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.424 qpair failed and we were unable to recover it. 
00:27:15.424 [2024-11-20 10:00:52.150383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.424 [2024-11-20 10:00:52.150410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420
00:27:15.424 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats without interruption from 10:00:52.150528 through 10:00:52.175990, cycling over tqpairs 0x7f4a2c000b90, 0x7f4a30000b90 and 0x7f4a38000b90, plus a single occurrence of tqpair 0x120dfa0 at 10:00:52.175429, all targeting addr=10.0.0.2, port=4420 ...]
00:27:15.429 [2024-11-20 10:00:52.176080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.429 [2024-11-20 10:00:52.176106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420
00:27:15.429 qpair failed and we were unable to recover it.
00:27:15.429 [2024-11-20 10:00:52.176195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-11-20 10:00:52.176223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-11-20 10:00:52.176319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-11-20 10:00:52.176358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-11-20 10:00:52.176454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-11-20 10:00:52.176481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-11-20 10:00:52.176567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-11-20 10:00:52.176593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-11-20 10:00:52.176679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-11-20 10:00:52.176704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-11-20 10:00:52.176819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-11-20 10:00:52.176845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-11-20 10:00:52.176932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-11-20 10:00:52.176960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-11-20 10:00:52.177068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.429 [2024-11-20 10:00:52.177109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.429 qpair failed and we were unable to recover it. 00:27:15.429 [2024-11-20 10:00:52.177216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.177255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.177363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.177392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 
00:27:15.430 [2024-11-20 10:00:52.177484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.177511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.177589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.177615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.177691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.177717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.177810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.177838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.177945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.177972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.178092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.178121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.178252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.178280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.178378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.178409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.178496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.178524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.178664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.178691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 
00:27:15.430 [2024-11-20 10:00:52.178770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.178796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.178876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.178903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.178986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.179011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.179106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.179145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.179240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.179266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.179357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.179384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.179501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.179527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.179621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.179651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.179737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.179764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.179872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.179899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 
00:27:15.430 [2024-11-20 10:00:52.179983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.180010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.180094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.180121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.180214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.180240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.180394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.180426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.180564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.180591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.180677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.180703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.180781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.180813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.180903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.180930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.181039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.181065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.181177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.181203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 
00:27:15.430 [2024-11-20 10:00:52.181300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.181347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.181469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.181497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.181592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.181621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.181714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.181741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.181826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.181852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.181963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.430 [2024-11-20 10:00:52.181989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.430 qpair failed and we were unable to recover it. 00:27:15.430 [2024-11-20 10:00:52.182068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.182095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.182178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.182204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.182290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.182325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.182423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.182452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 
00:27:15.431 [2024-11-20 10:00:52.182545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.182573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.182657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.182683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.182763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.182789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.182878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.182905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.182990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.183016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.183103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.183129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.183214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.183240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.183335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.183361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.183478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.183504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.183586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.183613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 
00:27:15.431 [2024-11-20 10:00:52.183695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.183721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.183863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.183889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.183997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.184022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.184155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.184195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.184287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.184321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.184408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.184435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.184524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.184552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.184665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.184691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.184817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.184844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.184930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.184958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 
00:27:15.431 [2024-11-20 10:00:52.185057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.185096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.185190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.185218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.185332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.185359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.185451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.185478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.185566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.185593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.185689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.185717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.185802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.185835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.185915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.185941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.186049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.186074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.186155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.186181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 
00:27:15.431 [2024-11-20 10:00:52.186266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.186292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.186379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.186405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.186490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.186516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.431 [2024-11-20 10:00:52.186625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.431 [2024-11-20 10:00:52.186650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.431 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.186763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.186791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.186876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.186902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.186981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.187007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.187088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.187115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.187195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.187221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.187307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.187335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 
00:27:15.432 [2024-11-20 10:00:52.187428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.187455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.187544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.187572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.187657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.187684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.187770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.187797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.187896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.187925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.188011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.188036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.188149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.188176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.188292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.188330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.188418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.188445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.188526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.188552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 
00:27:15.432 [2024-11-20 10:00:52.188667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.188693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.188774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.188800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.188916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.188943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.189027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.189060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.189139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.189165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.189278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.189312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.189438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.189465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.189551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.189579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.189666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.189694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.189777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.189805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 
00:27:15.432 [2024-11-20 10:00:52.189897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.189923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.190037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.190065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.190144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.190171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.190291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.190325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.190415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.190442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.190523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.190549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.190658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.190684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.190775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.190802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.190916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.190944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.191027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.191054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 
00:27:15.432 [2024-11-20 10:00:52.191157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.191196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.432 [2024-11-20 10:00:52.191328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.432 [2024-11-20 10:00:52.191356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.432 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.191467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.191494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.191582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.191608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.191696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.191722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.191808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.191834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.191959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.191986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.192075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.192103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.192189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.192215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.192298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.192333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 
00:27:15.433 [2024-11-20 10:00:52.192436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.192466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.192552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.192578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.192663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.192691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.192770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.192797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.192878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.192904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.192990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.193017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.193107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.193133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.193214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.193240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.193321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.193349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.193437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.193466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 
00:27:15.433 [2024-11-20 10:00:52.193560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.193587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.193679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.193707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.193800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.193826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.193906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.193937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.194017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.194043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.194129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.194156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.194240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.194267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.194354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.194382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.194468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.194495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.433 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:15.433 qpair failed and we were unable to recover it. 
00:27:15.433 [2024-11-20 10:00:52.194592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:15.433 [2024-11-20 10:00:52.194628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.194730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:15.433 [2024-11-20 10:00:52.194758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.194838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:15.433 [2024-11-20 10:00:52.194865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:15.433 [2024-11-20 10:00:52.194974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.195001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.195145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.433 [2024-11-20 10:00:52.195172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.433 qpair failed and we were unable to recover it. 00:27:15.433 [2024-11-20 10:00:52.195269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.195298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.195409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.195435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.195521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.195547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 
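The xtrace fragments interleaved above come from the host-side bash scripts: the "(( i == 0 ))" check followed by "return 0" in autotest_common.sh reads like a bounded wait loop that has just succeeded, after which "timing_exit start_nvmf_tgt" closes the target start-up phase and xtrace is switched off again. A minimal sketch of that style of bounded-retry helper, written only as an illustration under assumed names and limits (it is not the actual autotest_common.sh code):

  wait_until_ready() {
      local i=50                    # retry budget (assumed value)
      while (( i != 0 )); do
          probe_ready && break      # hypothetical readiness check
          (( i-- ))
          sleep 0.5
      done
      (( i == 0 )) && return 1      # budget exhausted -> failure
      return 0                      # became ready within the budget
  }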
00:27:15.434 [2024-11-20 10:00:52.195637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.195664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.195750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.195776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.195871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.195897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.196007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.196033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.196119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.196145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.196245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.196273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.196376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.196404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.196511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.196537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.196662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.196688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.196770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.196796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 
00:27:15.434 [2024-11-20 10:00:52.196874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.196900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.196998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.197024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.197102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.197127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.197205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.197231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.197360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.197388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.197487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.197526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.197647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.197675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.197783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.197809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.197904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.197931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.198041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.198067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 
00:27:15.434 [2024-11-20 10:00:52.198153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.198180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.198266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.198294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.198419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.198445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.198530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.198556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.198672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.198703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.198787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.198812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.198900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.198926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.199008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.199033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.199114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.199139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.199221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.199247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 
00:27:15.434 [2024-11-20 10:00:52.199346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.199372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.199486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.199511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.199588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.199622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.199719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.199745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.434 [2024-11-20 10:00:52.199860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.434 [2024-11-20 10:00:52.199886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.434 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.199997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.200023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.200111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.200138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.200230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.200256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.200358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.200387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.200498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.200525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 
00:27:15.435 [2024-11-20 10:00:52.200613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.200639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.200718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.200744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.200840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.200880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.200981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.201021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.201112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.201139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.201218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.201243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.201341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.201367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.201449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.201475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.201557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.201583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.201696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.201721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 
00:27:15.435 [2024-11-20 10:00:52.201802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.201827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.201905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.201934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.202032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.202073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.202181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.202221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.202359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.202387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.202472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.202497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.202575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.202601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.202678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.202704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.202787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.202813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.202911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.202940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 
00:27:15.435 [2024-11-20 10:00:52.203029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.203056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.203157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.203186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.203267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.203311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.203400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.203426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.203539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.203566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.203667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.203693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.203830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.203855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.203958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.203986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.204102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.204129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.204232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.204272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 
00:27:15.435 [2024-11-20 10:00:52.204383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.204412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.204498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.204525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.435 [2024-11-20 10:00:52.204619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.435 [2024-11-20 10:00:52.204646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.435 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.204737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.204766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.204851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.204878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.204977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.205004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.205099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.205126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.205236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.205277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.205406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.205439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.205522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.205549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 
00:27:15.436 [2024-11-20 10:00:52.205662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.205690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.205786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.205814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.205908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.205935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.206021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.206048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.206130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.206155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.206269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.206309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.206399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.206426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.206508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.206534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.206649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.206675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.206758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.206786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 
00:27:15.436 [2024-11-20 10:00:52.206868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.206894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.206988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.207021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.207118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.207145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.207260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.207288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.207391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.207417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.207502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.207529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.207646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.207672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.207764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.207791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.207878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.207907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.207995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.208022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 
00:27:15.436 [2024-11-20 10:00:52.208105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.208131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.208247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.208274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.208372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.208398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.208479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.208505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.208586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.208621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.208780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.208820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.208919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.208948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.209031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.209058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.209170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.209197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.436 [2024-11-20 10:00:52.209282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.209313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 
00:27:15.436 [2024-11-20 10:00:52.209398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.436 [2024-11-20 10:00:52.209424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.436 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.209508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.209535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.209624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.209650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.209746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.209773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.209860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.209888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.210004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.210031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.210121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.210150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.210236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.210264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.210357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.210384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.210476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.210502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 
00:27:15.437 [2024-11-20 10:00:52.210582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.210607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.210688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.210713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.210791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.210817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.210925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.210951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.211061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.211093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.211177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.211205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.211298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.211332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.211420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.211450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.211545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.211572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.211692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.211718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 
00:27:15.437 [2024-11-20 10:00:52.211811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:15.437 [2024-11-20 10:00:52.211838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.211920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.211950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.437 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.212049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.212077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.437 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:15.437 [2024-11-20 10:00:52.212179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.212218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.212318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.212357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.212447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.212474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.212557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.212583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 00:27:15.437 [2024-11-20 10:00:52.212675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.437 [2024-11-20 10:00:52.212703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.437 qpair failed and we were unable to recover it. 
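In between the connection errors, the trace shows the test arming its cleanup handler (trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT) and then creating the backing device with "rpc_cmd bdev_malloc_create 64 512 -b Malloc0", i.e. a 64 MB RAM-backed malloc bdev with a 512-byte block size named Malloc0. Outside the rpc_cmd wrapper the same step is normally issued through SPDK's rpc.py; a sketch, with the script path assumed relative to an SPDK checkout:

  # create a 64 MB malloc bdev with 512-byte blocks, named Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # and remove it again during teardown
  ./scripts/rpc.py bdev_malloc_delete Malloc0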
00:27:15.437 [2024-11-20 10:00:52.212785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.437 [2024-11-20 10:00:52.212812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420
00:27:15.437 qpair failed and we were unable to recover it.
[... the same three-line record (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 10:00:52.212922 through 10:00:52.238903, cycling through tqpair=0x7f4a30000b90, 0x7f4a2c000b90, 0x7f4a38000b90 and 0x120dfa0, always against addr=10.0.0.2, port=4420 ...]
00:27:15.443 [2024-11-20 10:00:52.239003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.443 [2024-11-20 10:00:52.239042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420
00:27:15.443 qpair failed and we were unable to recover it.
00:27:15.443 [2024-11-20 10:00:52.239141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-11-20 10:00:52.239171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-11-20 10:00:52.239262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-11-20 10:00:52.239290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-11-20 10:00:52.239390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-11-20 10:00:52.239418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-11-20 10:00:52.239498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-11-20 10:00:52.239525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-11-20 10:00:52.239622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-11-20 10:00:52.239648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-11-20 10:00:52.239732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-11-20 10:00:52.239758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-11-20 10:00:52.239838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-11-20 10:00:52.239864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-11-20 10:00:52.240006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-11-20 10:00:52.240034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-11-20 10:00:52.240117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-11-20 10:00:52.240143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-11-20 10:00:52.240229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-11-20 10:00:52.240257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 
00:27:15.443 [2024-11-20 10:00:52.240353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-11-20 10:00:52.240380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-11-20 10:00:52.240466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-11-20 10:00:52.240493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-11-20 10:00:52.240582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-11-20 10:00:52.240611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-11-20 10:00:52.240724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-11-20 10:00:52.240750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-11-20 10:00:52.240857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-11-20 10:00:52.240883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-11-20 10:00:52.240966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-11-20 10:00:52.240992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-11-20 10:00:52.241066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.443 [2024-11-20 10:00:52.241093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.443 qpair failed and we were unable to recover it. 00:27:15.443 [2024-11-20 10:00:52.241185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.241214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.241321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.241349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.241443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.241469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 
00:27:15.444 [2024-11-20 10:00:52.241553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.241579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.241671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.241697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.241812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.241840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.241920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.241951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.242039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.242066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.242148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.242174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.242259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.242285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.242375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.242402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.242502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.242531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.242626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.242653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 
00:27:15.444 [2024-11-20 10:00:52.242737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.242763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.242877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.242903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.243013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.243040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.243122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.243148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.243229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.243256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.243358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.243387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.243498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.243537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.243640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.243668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.243756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.243782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.243867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.243894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 
00:27:15.444 [2024-11-20 10:00:52.243981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.244007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.244100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.244128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.244249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.244277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.244412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.244442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.244535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.244561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.244674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.244700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.244784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.244810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.244906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.244933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.245021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.245047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.245152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.245192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 
00:27:15.444 [2024-11-20 10:00:52.245281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.245317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.245414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.245442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.245529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.245555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.245643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.245670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.245750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.444 [2024-11-20 10:00:52.245776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.444 qpair failed and we were unable to recover it. 00:27:15.444 [2024-11-20 10:00:52.245865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.245892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-11-20 10:00:52.245991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.246031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-11-20 10:00:52.246119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.246147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-11-20 10:00:52.246230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.246256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-11-20 10:00:52.246378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.246406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 
00:27:15.445 [2024-11-20 10:00:52.246508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.246534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-11-20 10:00:52.246629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.246655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-11-20 10:00:52.246740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.246765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-11-20 10:00:52.246862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.246897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-11-20 10:00:52.246997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.247024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-11-20 10:00:52.247124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.247164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-11-20 10:00:52.247263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.247299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-11-20 10:00:52.247418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.247448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-11-20 10:00:52.247567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.247603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-11-20 10:00:52.247684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.247711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 
00:27:15.445 [2024-11-20 10:00:52.247793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.247818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-11-20 10:00:52.247920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.247959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-11-20 10:00:52.248055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.248083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-11-20 10:00:52.248186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.248225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-11-20 10:00:52.248330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.248359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.445 [2024-11-20 10:00:52.248446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.445 [2024-11-20 10:00:52.248472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.445 qpair failed and we were unable to recover it. 00:27:15.709 [2024-11-20 10:00:52.248556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.709 [2024-11-20 10:00:52.248582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.709 qpair failed and we were unable to recover it. 00:27:15.709 [2024-11-20 10:00:52.248677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.709 [2024-11-20 10:00:52.248702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.709 qpair failed and we were unable to recover it. 00:27:15.709 [2024-11-20 10:00:52.248783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.709 [2024-11-20 10:00:52.248808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.709 qpair failed and we were unable to recover it. 00:27:15.709 [2024-11-20 10:00:52.248891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.709 [2024-11-20 10:00:52.248916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.709 qpair failed and we were unable to recover it. 
00:27:15.709 [2024-11-20 10:00:52.249003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.249032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.249117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.249143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.249238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.249264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.249366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.249393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.249477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.249503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 Malloc0 00:27:15.710 [2024-11-20 10:00:52.249603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.249633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.249731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.249757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.249846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.249872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.249957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.249983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 
00:27:15.710 [2024-11-20 10:00:52.250072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.710 [2024-11-20 10:00:52.250099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.250192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.250217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.250312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:15.710 [2024-11-20 10:00:52.250338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.250426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.250452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.710 [2024-11-20 10:00:52.250533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.250560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.250658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.250684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:15.710 [2024-11-20 10:00:52.250770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.250795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.250898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.250923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 
00:27:15.710 [2024-11-20 10:00:52.251020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.251047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.251135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.251161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.251244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.251269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.251405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.251431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.251559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.251584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.251677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.251703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.251785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.251810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.251921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.251946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.252021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.252047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.252125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.252151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 
00:27:15.710 [2024-11-20 10:00:52.252237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.252262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.252361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.252392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.252508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.252535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.252627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.252664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.252761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.252789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.252878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.252907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.253041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.253070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.710 qpair failed and we were unable to recover it. 00:27:15.710 [2024-11-20 10:00:52.253157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.710 [2024-11-20 10:00:52.253184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.711 qpair failed and we were unable to recover it. 00:27:15.711 [2024-11-20 10:00:52.253309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.711 [2024-11-20 10:00:52.253316] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:15.711 [2024-11-20 10:00:52.253339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.711 qpair failed and we were unable to recover it. 00:27:15.711 [2024-11-20 10:00:52.253431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.711 [2024-11-20 10:00:52.253456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.711 qpair failed and we were unable to recover it. 
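Interleaved with these failures, the test script output above (host/target_disconnect.sh, nvmf_target_disconnect_tc2) shows the TCP transport being created on the target ("*** TCP Transport Init ***") while the host side keeps re-attempting the qpair connection and logging each refused attempt. As a rough illustration only, and not SPDK's nvme_tcp reconnect path, a bounded retry loop around connect() produces the same pattern of repeated errno 111 reports until a listener appears; the address, port, attempt count, and delay below are assumptions for the example.

```c
/* Generic retry-loop sketch mirroring the pattern in the log: each refused
 * attempt is reported, then the connection is retried until a listener is up
 * or the loop gives up. Not SPDK's reconnect logic; all parameters are
 * assumptions chosen for illustration. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Same report format as the log entries above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return -1;
    }
    return fd; /* caller owns the connected socket */
}

int main(void)
{
    /* Retry until the listener comes up or we give up, mirroring the repeated
     * "qpair failed and we were unable to recover it" entries. */
    for (int attempt = 0; attempt < 10; attempt++) {
        int fd = try_connect("10.0.0.2", 4420);
        if (fd >= 0) {
            printf("connected on attempt %d\n", attempt + 1);
            close(fd);
            return 0;
        }
        usleep(100 * 1000); /* brief pause before the next attempt */
    }
    printf("gave up: listener never became reachable\n");
    return 1;
}
```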
00:27:15.711 [2024-11-20 10:00:52.253542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.711 [2024-11-20 10:00:52.253568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.711 qpair failed and we were unable to recover it. 00:27:15.711 [2024-11-20 10:00:52.253659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.711 [2024-11-20 10:00:52.253685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.711 qpair failed and we were unable to recover it. 00:27:15.711 [2024-11-20 10:00:52.253768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.711 [2024-11-20 10:00:52.253794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.711 qpair failed and we were unable to recover it. 00:27:15.711 [2024-11-20 10:00:52.253883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.711 [2024-11-20 10:00:52.253909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.711 qpair failed and we were unable to recover it. 00:27:15.711 [2024-11-20 10:00:52.254023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.711 [2024-11-20 10:00:52.254051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.711 qpair failed and we were unable to recover it. 00:27:15.711 [2024-11-20 10:00:52.254140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.711 [2024-11-20 10:00:52.254167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.711 qpair failed and we were unable to recover it. 00:27:15.711 [2024-11-20 10:00:52.254278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.711 [2024-11-20 10:00:52.254312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.711 qpair failed and we were unable to recover it. 00:27:15.711 [2024-11-20 10:00:52.254397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.711 [2024-11-20 10:00:52.254423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.711 qpair failed and we were unable to recover it. 00:27:15.711 [2024-11-20 10:00:52.254516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.711 [2024-11-20 10:00:52.254543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.711 qpair failed and we were unable to recover it. 00:27:15.711 [2024-11-20 10:00:52.254641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.711 [2024-11-20 10:00:52.254668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.711 qpair failed and we were unable to recover it. 
00:27:15.711 [2024-11-20 10:00:52.254776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.711 [2024-11-20 10:00:52.254803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420
00:27:15.711 qpair failed and we were unable to recover it.
[... the same three-line pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats between 10:00:52.254 and 10:00:52.260 for tqpairs 0x120dfa0, 0x7f4a38000b90, 0x7f4a30000b90 and 0x7f4a2c000b90, all with addr=10.0.0.2, port=4420 ...]
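errno = 111 is ECONNREFUSED: the host side keeps retrying connect() against 10.0.0.2:4420 while nothing on the target is accepting on that port yet, which is the condition nvmf_target_disconnect_tc2 deliberately creates before the subsystem and listener are set up below. A minimal illustrative shell sketch of waiting out that state; the nc probe and the retry loop are assumptions for illustration, not part of the test scripts:

# Illustrative only: poll until something accepts TCP connections on the NVMe/TCP port.
# While nothing listens, connect() fails with errno 111 (ECONNREFUSED), as in the log above.
ADDR=10.0.0.2
PORT=4420
until nc -z -w 1 "$ADDR" "$PORT"; do
  echo "connect() to $ADDR:$PORT refused, retrying..."
  sleep 1
done
echo "listener reachable at $ADDR:$PORT"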
00:27:15.712 [2024-11-20 10:00:52.260892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.712 [2024-11-20 10:00:52.260919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420
00:27:15.712 qpair failed and we were unable to recover it.
[... further connect() failed (errno = 111) / sock connection error / qpair-failed entries for tqpairs 0x7f4a38000b90, 0x120dfa0 and 0x7f4a2c000b90, addr=10.0.0.2, port=4420, interleaved with the shell trace below ...]
00:27:15.712 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.712 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:15.712 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.712 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
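The rpc_cmd call traced above is the test helper that forwards to SPDK's scripts/rpc.py. A hedged sketch of the equivalent direct invocation follows; it assumes an SPDK target application is already running and reachable on its default RPC socket, and the transport-creation step is shown only for context and is presumed to have happened earlier in the test:

# Assumed context: the NVMe-oF TCP transport already exists in the running target.
./scripts/rpc.py nvmf_create_transport -t TCP
# Create the subsystem referenced in the trace: -a allows any host, -s sets the serial number.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001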
00:27:15.713 [2024-11-20 10:00:52.263142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.713 [2024-11-20 10:00:52.263181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420
00:27:15.713 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / qpair-failed pattern repeats between 10:00:52.263 and 10:00:52.269 for tqpairs 0x7f4a38000b90, 0x7f4a30000b90, 0x7f4a2c000b90 and 0x120dfa0, addr=10.0.0.2, port=4420 ...]
00:27:15.714 [2024-11-20 10:00:52.269394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.714 [2024-11-20 10:00:52.269420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420
00:27:15.714 qpair failed and we were unable to recover it.
[... more connect() failed (errno = 111) / sock connection error / qpair-failed entries for tqpairs 0x120dfa0, 0x7f4a30000b90 and 0x7f4a2c000b90, addr=10.0.0.2, port=4420, interleaved with the shell trace below ...]
00:27:15.714 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.714 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:15.714 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.714 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
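host/target_disconnect.sh@24 attaches the bdev Malloc0 as a namespace of cnode1. A sketch of the same step via scripts/rpc.py; the bdev_malloc_create size and block size shown here (64 MiB, 512-byte blocks) are illustrative assumptions, since the log does not show how Malloc0 was created:

# Create a RAM-backed bdev to serve as the namespace (sizes are illustrative).
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# Expose it as a namespace of the subsystem created above.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0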
00:27:15.714 [2024-11-20 10:00:52.270375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.714 [2024-11-20 10:00:52.270405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420
00:27:15.714 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / qpair-failed pattern repeats between 10:00:52.270 and 10:00:52.277 for tqpairs 0x7f4a38000b90, 0x7f4a30000b90, 0x7f4a2c000b90 and 0x120dfa0, addr=10.0.0.2, port=4420 ...]
00:27:15.716 [2024-11-20 10:00:52.277564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.716 [2024-11-20 10:00:52.277591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420
00:27:15.716 qpair failed and we were unable to recover it.
00:27:15.716 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.716 [2024-11-20 10:00:52.277673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.716 [2024-11-20 10:00:52.277700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420
00:27:15.716 qpair failed and we were unable to recover it.
00:27:15.716 [2024-11-20 10:00:52.277828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.716 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:15.716 [2024-11-20 10:00:52.277854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420
00:27:15.716 qpair failed and we were unable to recover it.
00:27:15.716 [2024-11-20 10:00:52.277956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.716 [2024-11-20 10:00:52.277982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420
00:27:15.716 qpair failed and we were unable to recover it.
00:27:15.716 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.716 [2024-11-20 10:00:52.278070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.716 [2024-11-20 10:00:52.278096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420
00:27:15.716 qpair failed and we were unable to recover it.
00:27:15.716 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:15.716 [2024-11-20 10:00:52.278180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.716 [2024-11-20 10:00:52.278207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420
00:27:15.716 qpair failed and we were unable to recover it.
00:27:15.716 [2024-11-20 10:00:52.278293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.716 [2024-11-20 10:00:52.278325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420
00:27:15.716 qpair failed and we were unable to recover it.
00:27:15.716 [2024-11-20 10:00:52.278403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.716 [2024-11-20 10:00:52.278429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420
00:27:15.716 qpair failed and we were unable to recover it.
00:27:15.716 [2024-11-20 10:00:52.278515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-11-20 10:00:52.278542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-11-20 10:00:52.278656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-11-20 10:00:52.278682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-11-20 10:00:52.278758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-11-20 10:00:52.278785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-11-20 10:00:52.278911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-11-20 10:00:52.278937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-11-20 10:00:52.279023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-11-20 10:00:52.279049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-11-20 10:00:52.279154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-11-20 10:00:52.279193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-11-20 10:00:52.279280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-11-20 10:00:52.279315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-11-20 10:00:52.279416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-11-20 10:00:52.279443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-11-20 10:00:52.279523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-11-20 10:00:52.279549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-11-20 10:00:52.279638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-11-20 10:00:52.279664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 
00:27:15.717 [2024-11-20 10:00:52.279754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.717 [2024-11-20 10:00:52.279780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420
00:27:15.717 qpair failed and we were unable to recover it.
00:27:15.717 [2024-11-20 10:00:52.279872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.717 [2024-11-20 10:00:52.279900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420
00:27:15.717 qpair failed and we were unable to recover it.
00:27:15.717 [2024-11-20 10:00:52.279999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.717 [2024-11-20 10:00:52.280029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420
00:27:15.717 qpair failed and we were unable to recover it.
00:27:15.717 [2024-11-20 10:00:52.280134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.717 [2024-11-20 10:00:52.280174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420
00:27:15.717 qpair failed and we were unable to recover it.
00:27:15.717 [2024-11-20 10:00:52.280275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.717 [2024-11-20 10:00:52.280310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420
00:27:15.717 qpair failed and we were unable to recover it.
00:27:15.717 [2024-11-20 10:00:52.280407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.717 [2024-11-20 10:00:52.280433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420
00:27:15.717 qpair failed and we were unable to recover it.
00:27:15.717 [2024-11-20 10:00:52.280526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.717 [2024-11-20 10:00:52.280554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420
00:27:15.717 qpair failed and we were unable to recover it.
00:27:15.717 [2024-11-20 10:00:52.280647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.717 [2024-11-20 10:00:52.280674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a30000b90 with addr=10.0.0.2, port=4420
00:27:15.717 qpair failed and we were unable to recover it.
00:27:15.717 [2024-11-20 10:00:52.280769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.717 [2024-11-20 10:00:52.280797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420
00:27:15.717 qpair failed and we were unable to recover it.
00:27:15.717 [2024-11-20 10:00:52.280879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.717 [2024-11-20 10:00:52.280905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420
00:27:15.717 qpair failed and we were unable to recover it.
00:27:15.717 [2024-11-20 10:00:52.280988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-11-20 10:00:52.281014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-11-20 10:00:52.281105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-11-20 10:00:52.281131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a2c000b90 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-11-20 10:00:52.281218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-11-20 10:00:52.281245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120dfa0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-11-20 10:00:52.281349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-11-20 10:00:52.281378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-11-20 10:00:52.281463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-11-20 10:00:52.281489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a38000b90 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-11-20 10:00:52.281584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:15.717 [2024-11-20 10:00:52.284126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.717 [2024-11-20 10:00:52.284239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.717 [2024-11-20 10:00:52.284267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.717 [2024-11-20 10:00:52.284283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.717 [2024-11-20 10:00:52.284314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.717 [2024-11-20 10:00:52.284352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.717 qpair failed and we were unable to recover it. 
00:27:15.717 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.717 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:15.717 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.717 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:15.717 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.717 10:00:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3850553 00:27:15.717 [2024-11-20 10:00:52.293987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.717 [2024-11-20 10:00:52.294079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.717 [2024-11-20 10:00:52.294109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.717 [2024-11-20 10:00:52.294125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.717 [2024-11-20 10:00:52.294139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.717 [2024-11-20 10:00:52.294170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-11-20 10:00:52.303972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.717 [2024-11-20 10:00:52.304060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.717 [2024-11-20 10:00:52.304087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.717 [2024-11-20 10:00:52.304101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.717 [2024-11-20 10:00:52.304115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.717 [2024-11-20 10:00:52.304145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.717 qpair failed and we were unable to recover it. 
00:27:15.717 [2024-11-20 10:00:52.314085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.717 [2024-11-20 10:00:52.314182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.717 [2024-11-20 10:00:52.314208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.717 [2024-11-20 10:00:52.314223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.717 [2024-11-20 10:00:52.314236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.717 [2024-11-20 10:00:52.314266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-11-20 10:00:52.324030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.717 [2024-11-20 10:00:52.324123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.717 [2024-11-20 10:00:52.324148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.718 [2024-11-20 10:00:52.324163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.718 [2024-11-20 10:00:52.324176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.718 [2024-11-20 10:00:52.324214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.718 qpair failed and we were unable to recover it. 00:27:15.718 [2024-11-20 10:00:52.333955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.718 [2024-11-20 10:00:52.334040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.718 [2024-11-20 10:00:52.334066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.718 [2024-11-20 10:00:52.334080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.718 [2024-11-20 10:00:52.334093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.718 [2024-11-20 10:00:52.334124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.718 qpair failed and we were unable to recover it. 
00:27:15.718 [2024-11-20 10:00:52.343964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.718 [2024-11-20 10:00:52.344047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.718 [2024-11-20 10:00:52.344072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.718 [2024-11-20 10:00:52.344086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.718 [2024-11-20 10:00:52.344099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.718 [2024-11-20 10:00:52.344130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.718 qpair failed and we were unable to recover it. 00:27:15.718 [2024-11-20 10:00:52.354029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.718 [2024-11-20 10:00:52.354120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.718 [2024-11-20 10:00:52.354149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.718 [2024-11-20 10:00:52.354164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.718 [2024-11-20 10:00:52.354177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.718 [2024-11-20 10:00:52.354207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.718 qpair failed and we were unable to recover it. 00:27:15.718 [2024-11-20 10:00:52.364184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.718 [2024-11-20 10:00:52.364301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.718 [2024-11-20 10:00:52.364334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.718 [2024-11-20 10:00:52.364348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.718 [2024-11-20 10:00:52.364361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.718 [2024-11-20 10:00:52.364391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.718 qpair failed and we were unable to recover it. 
00:27:15.718 [2024-11-20 10:00:52.374184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.718 [2024-11-20 10:00:52.374280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.718 [2024-11-20 10:00:52.374316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.718 [2024-11-20 10:00:52.374332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.718 [2024-11-20 10:00:52.374346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.718 [2024-11-20 10:00:52.374376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.718 qpair failed and we were unable to recover it. 00:27:15.718 [2024-11-20 10:00:52.384123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.718 [2024-11-20 10:00:52.384205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.718 [2024-11-20 10:00:52.384232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.718 [2024-11-20 10:00:52.384247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.718 [2024-11-20 10:00:52.384260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.718 [2024-11-20 10:00:52.384291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.718 qpair failed and we were unable to recover it. 00:27:15.718 [2024-11-20 10:00:52.394237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.718 [2024-11-20 10:00:52.394354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.718 [2024-11-20 10:00:52.394382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.718 [2024-11-20 10:00:52.394397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.718 [2024-11-20 10:00:52.394410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.718 [2024-11-20 10:00:52.394442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.718 qpair failed and we were unable to recover it. 
00:27:15.718 [2024-11-20 10:00:52.404151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.718 [2024-11-20 10:00:52.404241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.718 [2024-11-20 10:00:52.404267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.718 [2024-11-20 10:00:52.404282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.718 [2024-11-20 10:00:52.404297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.718 [2024-11-20 10:00:52.404336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.718 qpair failed and we were unable to recover it. 00:27:15.718 [2024-11-20 10:00:52.414195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.718 [2024-11-20 10:00:52.414289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.718 [2024-11-20 10:00:52.414327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.718 [2024-11-20 10:00:52.414344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.718 [2024-11-20 10:00:52.414357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.718 [2024-11-20 10:00:52.414389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.718 qpair failed and we were unable to recover it. 00:27:15.718 [2024-11-20 10:00:52.424199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.718 [2024-11-20 10:00:52.424285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.718 [2024-11-20 10:00:52.424320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.718 [2024-11-20 10:00:52.424336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.718 [2024-11-20 10:00:52.424350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.718 [2024-11-20 10:00:52.424380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.718 qpair failed and we were unable to recover it. 
00:27:15.718 [2024-11-20 10:00:52.434242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.718 [2024-11-20 10:00:52.434348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.718 [2024-11-20 10:00:52.434374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.718 [2024-11-20 10:00:52.434388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.718 [2024-11-20 10:00:52.434402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.718 [2024-11-20 10:00:52.434434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.718 qpair failed and we were unable to recover it. 00:27:15.718 [2024-11-20 10:00:52.444275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.718 [2024-11-20 10:00:52.444383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.718 [2024-11-20 10:00:52.444408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.718 [2024-11-20 10:00:52.444423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.718 [2024-11-20 10:00:52.444436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.718 [2024-11-20 10:00:52.444467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.718 qpair failed and we were unable to recover it. 00:27:15.718 [2024-11-20 10:00:52.454341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.718 [2024-11-20 10:00:52.454428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.718 [2024-11-20 10:00:52.454455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.719 [2024-11-20 10:00:52.454470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.719 [2024-11-20 10:00:52.454496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.719 [2024-11-20 10:00:52.454531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.719 qpair failed and we were unable to recover it. 
00:27:15.719 [2024-11-20 10:00:52.464345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.719 [2024-11-20 10:00:52.464426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.719 [2024-11-20 10:00:52.464452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.719 [2024-11-20 10:00:52.464467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.719 [2024-11-20 10:00:52.464480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.719 [2024-11-20 10:00:52.464512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.719 qpair failed and we were unable to recover it. 00:27:15.719 [2024-11-20 10:00:52.474372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.719 [2024-11-20 10:00:52.474461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.719 [2024-11-20 10:00:52.474488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.719 [2024-11-20 10:00:52.474503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.719 [2024-11-20 10:00:52.474516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.719 [2024-11-20 10:00:52.474548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.719 qpair failed and we were unable to recover it. 00:27:15.719 [2024-11-20 10:00:52.484403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.719 [2024-11-20 10:00:52.484491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.719 [2024-11-20 10:00:52.484517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.719 [2024-11-20 10:00:52.484532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.719 [2024-11-20 10:00:52.484545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.719 [2024-11-20 10:00:52.484575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.719 qpair failed and we were unable to recover it. 
00:27:15.719 [2024-11-20 10:00:52.494458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.719 [2024-11-20 10:00:52.494542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.719 [2024-11-20 10:00:52.494569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.719 [2024-11-20 10:00:52.494583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.719 [2024-11-20 10:00:52.494597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.719 [2024-11-20 10:00:52.494629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.719 qpair failed and we were unable to recover it. 00:27:15.719 [2024-11-20 10:00:52.504463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.719 [2024-11-20 10:00:52.504539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.719 [2024-11-20 10:00:52.504564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.719 [2024-11-20 10:00:52.504578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.719 [2024-11-20 10:00:52.504594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.719 [2024-11-20 10:00:52.504625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.719 qpair failed and we were unable to recover it. 00:27:15.719 [2024-11-20 10:00:52.514498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.719 [2024-11-20 10:00:52.514588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.719 [2024-11-20 10:00:52.514614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.719 [2024-11-20 10:00:52.514628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.719 [2024-11-20 10:00:52.514641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.719 [2024-11-20 10:00:52.514670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.719 qpair failed and we were unable to recover it. 
00:27:15.719 [2024-11-20 10:00:52.524558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.719 [2024-11-20 10:00:52.524679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.719 [2024-11-20 10:00:52.524703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.719 [2024-11-20 10:00:52.524717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.719 [2024-11-20 10:00:52.524730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.719 [2024-11-20 10:00:52.524760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.719 qpair failed and we were unable to recover it. 00:27:15.719 [2024-11-20 10:00:52.534548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.719 [2024-11-20 10:00:52.534634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.719 [2024-11-20 10:00:52.534661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.719 [2024-11-20 10:00:52.534676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.719 [2024-11-20 10:00:52.534689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.719 [2024-11-20 10:00:52.534719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.719 qpair failed and we were unable to recover it. 00:27:15.719 [2024-11-20 10:00:52.544656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.719 [2024-11-20 10:00:52.544755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.719 [2024-11-20 10:00:52.544786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.719 [2024-11-20 10:00:52.544802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.719 [2024-11-20 10:00:52.544815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.719 [2024-11-20 10:00:52.544845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.719 qpair failed and we were unable to recover it. 
00:27:15.719 [2024-11-20 10:00:52.554629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.719 [2024-11-20 10:00:52.554752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.719 [2024-11-20 10:00:52.554778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.719 [2024-11-20 10:00:52.554792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.719 [2024-11-20 10:00:52.554805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.719 [2024-11-20 10:00:52.554835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.719 qpair failed and we were unable to recover it. 00:27:15.719 [2024-11-20 10:00:52.564618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.719 [2024-11-20 10:00:52.564705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.719 [2024-11-20 10:00:52.564731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.719 [2024-11-20 10:00:52.564744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.719 [2024-11-20 10:00:52.564757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.719 [2024-11-20 10:00:52.564788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.719 qpair failed and we were unable to recover it. 00:27:15.720 [2024-11-20 10:00:52.574676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.720 [2024-11-20 10:00:52.574763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.720 [2024-11-20 10:00:52.574789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.720 [2024-11-20 10:00:52.574803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.720 [2024-11-20 10:00:52.574816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.720 [2024-11-20 10:00:52.574846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.720 qpair failed and we were unable to recover it. 
00:27:15.720 [2024-11-20 10:00:52.584769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.720 [2024-11-20 10:00:52.584869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.720 [2024-11-20 10:00:52.584895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.720 [2024-11-20 10:00:52.584909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.720 [2024-11-20 10:00:52.584928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.720 [2024-11-20 10:00:52.584961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.720 qpair failed and we were unable to recover it. 00:27:15.720 [2024-11-20 10:00:52.594726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.720 [2024-11-20 10:00:52.594811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.720 [2024-11-20 10:00:52.594836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.720 [2024-11-20 10:00:52.594851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.720 [2024-11-20 10:00:52.594864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.720 [2024-11-20 10:00:52.594893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.720 qpair failed and we were unable to recover it. 00:27:15.720 [2024-11-20 10:00:52.604728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.720 [2024-11-20 10:00:52.604820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.720 [2024-11-20 10:00:52.604846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.720 [2024-11-20 10:00:52.604860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.720 [2024-11-20 10:00:52.604873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.720 [2024-11-20 10:00:52.604903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.720 qpair failed and we were unable to recover it. 
00:27:15.720 [2024-11-20 10:00:52.614781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.720 [2024-11-20 10:00:52.614869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.720 [2024-11-20 10:00:52.614895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.720 [2024-11-20 10:00:52.614909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.720 [2024-11-20 10:00:52.614922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.720 [2024-11-20 10:00:52.614953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.720 qpair failed and we were unable to recover it. 00:27:15.978 [2024-11-20 10:00:52.624802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.978 [2024-11-20 10:00:52.624887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.978 [2024-11-20 10:00:52.624913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.978 [2024-11-20 10:00:52.624928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.978 [2024-11-20 10:00:52.624941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.978 [2024-11-20 10:00:52.624971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.978 qpair failed and we were unable to recover it. 00:27:15.978 [2024-11-20 10:00:52.634825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.978 [2024-11-20 10:00:52.634918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.978 [2024-11-20 10:00:52.634944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.978 [2024-11-20 10:00:52.634958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.978 [2024-11-20 10:00:52.634971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.978 [2024-11-20 10:00:52.635002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.978 qpair failed and we were unable to recover it. 
00:27:15.978 [2024-11-20 10:00:52.644893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.978 [2024-11-20 10:00:52.645019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.978 [2024-11-20 10:00:52.645044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.978 [2024-11-20 10:00:52.645059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.978 [2024-11-20 10:00:52.645071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.978 [2024-11-20 10:00:52.645103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.978 qpair failed and we were unable to recover it. 00:27:15.978 [2024-11-20 10:00:52.654919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.978 [2024-11-20 10:00:52.655006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.978 [2024-11-20 10:00:52.655030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.978 [2024-11-20 10:00:52.655044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.978 [2024-11-20 10:00:52.655058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.978 [2024-11-20 10:00:52.655088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.978 qpair failed and we were unable to recover it. 00:27:15.978 [2024-11-20 10:00:52.664894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.978 [2024-11-20 10:00:52.664987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.978 [2024-11-20 10:00:52.665013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.978 [2024-11-20 10:00:52.665027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.978 [2024-11-20 10:00:52.665040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.978 [2024-11-20 10:00:52.665071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.978 qpair failed and we were unable to recover it. 
00:27:15.978 [2024-11-20 10:00:52.674996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.978 [2024-11-20 10:00:52.675104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.978 [2024-11-20 10:00:52.675135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.979 [2024-11-20 10:00:52.675150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.979 [2024-11-20 10:00:52.675163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.979 [2024-11-20 10:00:52.675193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.979 qpair failed and we were unable to recover it. 00:27:15.979 [2024-11-20 10:00:52.684982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.979 [2024-11-20 10:00:52.685065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.979 [2024-11-20 10:00:52.685091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.979 [2024-11-20 10:00:52.685106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.979 [2024-11-20 10:00:52.685119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.979 [2024-11-20 10:00:52.685152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.979 qpair failed and we were unable to recover it. 00:27:15.979 [2024-11-20 10:00:52.695000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.979 [2024-11-20 10:00:52.695088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.979 [2024-11-20 10:00:52.695114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.979 [2024-11-20 10:00:52.695129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.979 [2024-11-20 10:00:52.695142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.979 [2024-11-20 10:00:52.695172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.979 qpair failed and we were unable to recover it. 
00:27:15.979 [2024-11-20 10:00:52.705002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.979 [2024-11-20 10:00:52.705085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.979 [2024-11-20 10:00:52.705111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.979 [2024-11-20 10:00:52.705125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.979 [2024-11-20 10:00:52.705139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.979 [2024-11-20 10:00:52.705169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.979 qpair failed and we were unable to recover it. 00:27:15.979 [2024-11-20 10:00:52.715041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.979 [2024-11-20 10:00:52.715133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.979 [2024-11-20 10:00:52.715158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.979 [2024-11-20 10:00:52.715179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.979 [2024-11-20 10:00:52.715193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.979 [2024-11-20 10:00:52.715224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.979 qpair failed and we were unable to recover it. 00:27:15.979 [2024-11-20 10:00:52.725064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.979 [2024-11-20 10:00:52.725147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.979 [2024-11-20 10:00:52.725173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.979 [2024-11-20 10:00:52.725187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.979 [2024-11-20 10:00:52.725201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.979 [2024-11-20 10:00:52.725231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.979 qpair failed and we were unable to recover it. 
00:27:15.979 [2024-11-20 10:00:52.735103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.979 [2024-11-20 10:00:52.735181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.979 [2024-11-20 10:00:52.735207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.979 [2024-11-20 10:00:52.735222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.979 [2024-11-20 10:00:52.735235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.979 [2024-11-20 10:00:52.735265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.979 qpair failed and we were unable to recover it. 00:27:15.979 [2024-11-20 10:00:52.745155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.979 [2024-11-20 10:00:52.745287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.979 [2024-11-20 10:00:52.745322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.979 [2024-11-20 10:00:52.745339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.979 [2024-11-20 10:00:52.745352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.979 [2024-11-20 10:00:52.745383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.979 qpair failed and we were unable to recover it. 00:27:15.979 [2024-11-20 10:00:52.755263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.979 [2024-11-20 10:00:52.755358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.979 [2024-11-20 10:00:52.755384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.979 [2024-11-20 10:00:52.755399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.979 [2024-11-20 10:00:52.755413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.979 [2024-11-20 10:00:52.755449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.979 qpair failed and we were unable to recover it. 
00:27:15.979 [2024-11-20 10:00:52.765186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.979 [2024-11-20 10:00:52.765269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.979 [2024-11-20 10:00:52.765295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.979 [2024-11-20 10:00:52.765318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.979 [2024-11-20 10:00:52.765332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.979 [2024-11-20 10:00:52.765363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.979 qpair failed and we were unable to recover it. 00:27:15.980 [2024-11-20 10:00:52.775188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.980 [2024-11-20 10:00:52.775276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.980 [2024-11-20 10:00:52.775308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.980 [2024-11-20 10:00:52.775326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.980 [2024-11-20 10:00:52.775340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.980 [2024-11-20 10:00:52.775371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.980 qpair failed and we were unable to recover it. 00:27:15.980 [2024-11-20 10:00:52.785217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.980 [2024-11-20 10:00:52.785295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.980 [2024-11-20 10:00:52.785329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.980 [2024-11-20 10:00:52.785344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.980 [2024-11-20 10:00:52.785356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.980 [2024-11-20 10:00:52.785388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.980 qpair failed and we were unable to recover it. 
00:27:15.980 [2024-11-20 10:00:52.795371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.980 [2024-11-20 10:00:52.795460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.980 [2024-11-20 10:00:52.795486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.980 [2024-11-20 10:00:52.795501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.980 [2024-11-20 10:00:52.795514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.980 [2024-11-20 10:00:52.795545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.980 qpair failed and we were unable to recover it. 00:27:15.980 [2024-11-20 10:00:52.805288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.980 [2024-11-20 10:00:52.805396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.980 [2024-11-20 10:00:52.805423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.980 [2024-11-20 10:00:52.805437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.980 [2024-11-20 10:00:52.805450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.980 [2024-11-20 10:00:52.805494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.980 qpair failed and we were unable to recover it. 00:27:15.980 [2024-11-20 10:00:52.815348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.980 [2024-11-20 10:00:52.815449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.980 [2024-11-20 10:00:52.815475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.980 [2024-11-20 10:00:52.815489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.980 [2024-11-20 10:00:52.815502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.980 [2024-11-20 10:00:52.815533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.980 qpair failed and we were unable to recover it. 
00:27:15.980 [2024-11-20 10:00:52.825343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.980 [2024-11-20 10:00:52.825468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.980 [2024-11-20 10:00:52.825493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.980 [2024-11-20 10:00:52.825508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.980 [2024-11-20 10:00:52.825522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.980 [2024-11-20 10:00:52.825553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.980 qpair failed and we were unable to recover it. 00:27:15.980 [2024-11-20 10:00:52.835397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.980 [2024-11-20 10:00:52.835512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.980 [2024-11-20 10:00:52.835538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.980 [2024-11-20 10:00:52.835552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.980 [2024-11-20 10:00:52.835566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.980 [2024-11-20 10:00:52.835598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.980 qpair failed and we were unable to recover it. 00:27:15.980 [2024-11-20 10:00:52.845396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.980 [2024-11-20 10:00:52.845484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.980 [2024-11-20 10:00:52.845510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.980 [2024-11-20 10:00:52.845531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.980 [2024-11-20 10:00:52.845545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.980 [2024-11-20 10:00:52.845576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.980 qpair failed and we were unable to recover it. 
00:27:15.980 [2024-11-20 10:00:52.855420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.980 [2024-11-20 10:00:52.855503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.980 [2024-11-20 10:00:52.855528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.980 [2024-11-20 10:00:52.855543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.980 [2024-11-20 10:00:52.855556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.980 [2024-11-20 10:00:52.855586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.980 qpair failed and we were unable to recover it. 00:27:15.980 [2024-11-20 10:00:52.865556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.980 [2024-11-20 10:00:52.865642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.980 [2024-11-20 10:00:52.865670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.980 [2024-11-20 10:00:52.865686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.981 [2024-11-20 10:00:52.865699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.981 [2024-11-20 10:00:52.865729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.981 qpair failed and we were unable to recover it. 00:27:15.981 [2024-11-20 10:00:52.875518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.981 [2024-11-20 10:00:52.875606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.981 [2024-11-20 10:00:52.875631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.981 [2024-11-20 10:00:52.875645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.981 [2024-11-20 10:00:52.875658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.981 [2024-11-20 10:00:52.875688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.981 qpair failed and we were unable to recover it. 
00:27:15.981 [2024-11-20 10:00:52.885520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.981 [2024-11-20 10:00:52.885641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.981 [2024-11-20 10:00:52.885668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.981 [2024-11-20 10:00:52.885682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.981 [2024-11-20 10:00:52.885695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:15.981 [2024-11-20 10:00:52.885731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.981 qpair failed and we were unable to recover it. 00:27:16.240 [2024-11-20 10:00:52.895535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.240 [2024-11-20 10:00:52.895637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.240 [2024-11-20 10:00:52.895663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.240 [2024-11-20 10:00:52.895677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.240 [2024-11-20 10:00:52.895690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.240 [2024-11-20 10:00:52.895722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.240 qpair failed and we were unable to recover it. 00:27:16.240 [2024-11-20 10:00:52.905659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.240 [2024-11-20 10:00:52.905742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.240 [2024-11-20 10:00:52.905768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.240 [2024-11-20 10:00:52.905782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.240 [2024-11-20 10:00:52.905795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.240 [2024-11-20 10:00:52.905827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.240 qpair failed and we were unable to recover it. 
00:27:16.240 [2024-11-20 10:00:52.915646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.240 [2024-11-20 10:00:52.915739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.240 [2024-11-20 10:00:52.915766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.240 [2024-11-20 10:00:52.915780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.240 [2024-11-20 10:00:52.915795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.240 [2024-11-20 10:00:52.915824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.240 qpair failed and we were unable to recover it. 00:27:16.240 [2024-11-20 10:00:52.925654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.240 [2024-11-20 10:00:52.925742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.240 [2024-11-20 10:00:52.925769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.240 [2024-11-20 10:00:52.925784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.240 [2024-11-20 10:00:52.925796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.240 [2024-11-20 10:00:52.925827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.240 qpair failed and we were unable to recover it. 00:27:16.240 [2024-11-20 10:00:52.935659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.240 [2024-11-20 10:00:52.935751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.240 [2024-11-20 10:00:52.935777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.240 [2024-11-20 10:00:52.935791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.240 [2024-11-20 10:00:52.935805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.240 [2024-11-20 10:00:52.935835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.240 qpair failed and we were unable to recover it. 
00:27:16.240 [2024-11-20 10:00:52.945748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.240 [2024-11-20 10:00:52.945833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.240 [2024-11-20 10:00:52.945858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.240 [2024-11-20 10:00:52.945873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.240 [2024-11-20 10:00:52.945886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.240 [2024-11-20 10:00:52.945917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.240 qpair failed and we were unable to recover it. 00:27:16.240 [2024-11-20 10:00:52.955734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.240 [2024-11-20 10:00:52.955829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.240 [2024-11-20 10:00:52.955855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.240 [2024-11-20 10:00:52.955870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.241 [2024-11-20 10:00:52.955883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.241 [2024-11-20 10:00:52.955915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.241 qpair failed and we were unable to recover it. 00:27:16.241 [2024-11-20 10:00:52.965749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.241 [2024-11-20 10:00:52.965831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.241 [2024-11-20 10:00:52.965858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.241 [2024-11-20 10:00:52.965872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.241 [2024-11-20 10:00:52.965885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.241 [2024-11-20 10:00:52.965916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.241 qpair failed and we were unable to recover it. 
00:27:16.241 [2024-11-20 10:00:52.975775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.241 [2024-11-20 10:00:52.975861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.241 [2024-11-20 10:00:52.975892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.241 [2024-11-20 10:00:52.975908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.241 [2024-11-20 10:00:52.975920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.241 [2024-11-20 10:00:52.975951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.241 qpair failed and we were unable to recover it. 00:27:16.241 [2024-11-20 10:00:52.985823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.241 [2024-11-20 10:00:52.985906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.241 [2024-11-20 10:00:52.985933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.241 [2024-11-20 10:00:52.985947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.241 [2024-11-20 10:00:52.985961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.241 [2024-11-20 10:00:52.985991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.241 qpair failed and we were unable to recover it. 00:27:16.241 [2024-11-20 10:00:52.995890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.241 [2024-11-20 10:00:52.996006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.241 [2024-11-20 10:00:52.996033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.241 [2024-11-20 10:00:52.996047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.241 [2024-11-20 10:00:52.996060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.241 [2024-11-20 10:00:52.996092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.241 qpair failed and we were unable to recover it. 
00:27:16.241 [2024-11-20 10:00:53.005861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.241 [2024-11-20 10:00:53.005963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.241 [2024-11-20 10:00:53.005989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.241 [2024-11-20 10:00:53.006003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.241 [2024-11-20 10:00:53.006017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.241 [2024-11-20 10:00:53.006046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.241 qpair failed and we were unable to recover it. 00:27:16.241 [2024-11-20 10:00:53.015953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.241 [2024-11-20 10:00:53.016042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.241 [2024-11-20 10:00:53.016069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.241 [2024-11-20 10:00:53.016084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.241 [2024-11-20 10:00:53.016104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.241 [2024-11-20 10:00:53.016138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.241 qpair failed and we were unable to recover it. 00:27:16.241 [2024-11-20 10:00:53.025938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.241 [2024-11-20 10:00:53.026021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.241 [2024-11-20 10:00:53.026046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.241 [2024-11-20 10:00:53.026061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.241 [2024-11-20 10:00:53.026074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.241 [2024-11-20 10:00:53.026105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.241 qpair failed and we were unable to recover it. 
00:27:16.241 [2024-11-20 10:00:53.035985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.241 [2024-11-20 10:00:53.036077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.241 [2024-11-20 10:00:53.036103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.241 [2024-11-20 10:00:53.036117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.241 [2024-11-20 10:00:53.036130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.241 [2024-11-20 10:00:53.036161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.241 qpair failed and we were unable to recover it. 00:27:16.241 [2024-11-20 10:00:53.045994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.241 [2024-11-20 10:00:53.046081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.241 [2024-11-20 10:00:53.046106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.241 [2024-11-20 10:00:53.046121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.241 [2024-11-20 10:00:53.046134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.241 [2024-11-20 10:00:53.046164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.241 qpair failed and we were unable to recover it. 00:27:16.241 [2024-11-20 10:00:53.056026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.241 [2024-11-20 10:00:53.056142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.241 [2024-11-20 10:00:53.056168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.241 [2024-11-20 10:00:53.056182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.241 [2024-11-20 10:00:53.056195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.241 [2024-11-20 10:00:53.056225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.241 qpair failed and we were unable to recover it. 
00:27:16.241 [2024-11-20 10:00:53.066036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.241 [2024-11-20 10:00:53.066130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.241 [2024-11-20 10:00:53.066156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.241 [2024-11-20 10:00:53.066170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.241 [2024-11-20 10:00:53.066184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.241 [2024-11-20 10:00:53.066214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.241 qpair failed and we were unable to recover it. 00:27:16.241 [2024-11-20 10:00:53.076239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.241 [2024-11-20 10:00:53.076354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.241 [2024-11-20 10:00:53.076380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.241 [2024-11-20 10:00:53.076394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.241 [2024-11-20 10:00:53.076407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.241 [2024-11-20 10:00:53.076438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.241 qpair failed and we were unable to recover it. 00:27:16.241 [2024-11-20 10:00:53.086165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.241 [2024-11-20 10:00:53.086308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.241 [2024-11-20 10:00:53.086345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.241 [2024-11-20 10:00:53.086359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.242 [2024-11-20 10:00:53.086373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.242 [2024-11-20 10:00:53.086403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.242 qpair failed and we were unable to recover it. 
00:27:16.242 [2024-11-20 10:00:53.096190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.242 [2024-11-20 10:00:53.096268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.242 [2024-11-20 10:00:53.096294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.242 [2024-11-20 10:00:53.096317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.242 [2024-11-20 10:00:53.096331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.242 [2024-11-20 10:00:53.096364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.242 qpair failed and we were unable to recover it. 00:27:16.242 [2024-11-20 10:00:53.106261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.242 [2024-11-20 10:00:53.106350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.242 [2024-11-20 10:00:53.106382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.242 [2024-11-20 10:00:53.106396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.242 [2024-11-20 10:00:53.106410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.242 [2024-11-20 10:00:53.106443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.242 qpair failed and we were unable to recover it. 00:27:16.242 [2024-11-20 10:00:53.116186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.242 [2024-11-20 10:00:53.116280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.242 [2024-11-20 10:00:53.116312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.242 [2024-11-20 10:00:53.116328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.242 [2024-11-20 10:00:53.116342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.242 [2024-11-20 10:00:53.116372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.242 qpair failed and we were unable to recover it. 
00:27:16.242 [2024-11-20 10:00:53.126235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.242 [2024-11-20 10:00:53.126328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.242 [2024-11-20 10:00:53.126354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.242 [2024-11-20 10:00:53.126368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.242 [2024-11-20 10:00:53.126381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.242 [2024-11-20 10:00:53.126412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.242 qpair failed and we were unable to recover it. 00:27:16.242 [2024-11-20 10:00:53.136247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.242 [2024-11-20 10:00:53.136336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.242 [2024-11-20 10:00:53.136363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.242 [2024-11-20 10:00:53.136377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.242 [2024-11-20 10:00:53.136390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.242 [2024-11-20 10:00:53.136421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.242 qpair failed and we were unable to recover it. 00:27:16.242 [2024-11-20 10:00:53.146362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.242 [2024-11-20 10:00:53.146460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.242 [2024-11-20 10:00:53.146512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.242 [2024-11-20 10:00:53.146528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.242 [2024-11-20 10:00:53.146546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.242 [2024-11-20 10:00:53.146594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.242 qpair failed and we were unable to recover it. 
00:27:16.501 [2024-11-20 10:00:53.156364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.501 [2024-11-20 10:00:53.156499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.501 [2024-11-20 10:00:53.156525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.501 [2024-11-20 10:00:53.156540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.501 [2024-11-20 10:00:53.156553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.501 [2024-11-20 10:00:53.156585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.501 qpair failed and we were unable to recover it. 00:27:16.501 [2024-11-20 10:00:53.166344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.501 [2024-11-20 10:00:53.166433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.501 [2024-11-20 10:00:53.166458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.501 [2024-11-20 10:00:53.166472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.501 [2024-11-20 10:00:53.166486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.501 [2024-11-20 10:00:53.166516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.501 qpair failed and we were unable to recover it. 00:27:16.501 [2024-11-20 10:00:53.176362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.502 [2024-11-20 10:00:53.176474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.502 [2024-11-20 10:00:53.176499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.502 [2024-11-20 10:00:53.176513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.502 [2024-11-20 10:00:53.176527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.502 [2024-11-20 10:00:53.176557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.502 qpair failed and we were unable to recover it. 
00:27:16.502 [2024-11-20 10:00:53.186423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.502 [2024-11-20 10:00:53.186507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.502 [2024-11-20 10:00:53.186532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.502 [2024-11-20 10:00:53.186546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.502 [2024-11-20 10:00:53.186559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.502 [2024-11-20 10:00:53.186591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.502 qpair failed and we were unable to recover it. 00:27:16.502 [2024-11-20 10:00:53.196441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.502 [2024-11-20 10:00:53.196533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.502 [2024-11-20 10:00:53.196558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.502 [2024-11-20 10:00:53.196573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.502 [2024-11-20 10:00:53.196586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.502 [2024-11-20 10:00:53.196616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.502 qpair failed and we were unable to recover it. 00:27:16.502 [2024-11-20 10:00:53.206480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.502 [2024-11-20 10:00:53.206569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.502 [2024-11-20 10:00:53.206596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.502 [2024-11-20 10:00:53.206610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.502 [2024-11-20 10:00:53.206623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.502 [2024-11-20 10:00:53.206656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.502 qpair failed and we were unable to recover it. 
00:27:16.502 [2024-11-20 10:00:53.216469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.502 [2024-11-20 10:00:53.216553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.502 [2024-11-20 10:00:53.216579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.502 [2024-11-20 10:00:53.216593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.502 [2024-11-20 10:00:53.216606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.502 [2024-11-20 10:00:53.216637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.502 qpair failed and we were unable to recover it. 00:27:16.502 [2024-11-20 10:00:53.226517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.502 [2024-11-20 10:00:53.226598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.502 [2024-11-20 10:00:53.226624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.502 [2024-11-20 10:00:53.226639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.502 [2024-11-20 10:00:53.226652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.502 [2024-11-20 10:00:53.226683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.502 qpair failed and we were unable to recover it. 00:27:16.502 [2024-11-20 10:00:53.236604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.502 [2024-11-20 10:00:53.236696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.502 [2024-11-20 10:00:53.236728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.502 [2024-11-20 10:00:53.236743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.502 [2024-11-20 10:00:53.236757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.502 [2024-11-20 10:00:53.236790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.502 qpair failed and we were unable to recover it. 
00:27:16.502 [2024-11-20 10:00:53.246605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.502 [2024-11-20 10:00:53.246732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.502 [2024-11-20 10:00:53.246761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.502 [2024-11-20 10:00:53.246778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.502 [2024-11-20 10:00:53.246790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.502 [2024-11-20 10:00:53.246822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.502 qpair failed and we were unable to recover it. 00:27:16.502 [2024-11-20 10:00:53.256630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.502 [2024-11-20 10:00:53.256714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.502 [2024-11-20 10:00:53.256740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.502 [2024-11-20 10:00:53.256754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.502 [2024-11-20 10:00:53.256767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.502 [2024-11-20 10:00:53.256798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.502 qpair failed and we were unable to recover it. 00:27:16.502 [2024-11-20 10:00:53.266697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.502 [2024-11-20 10:00:53.266825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.502 [2024-11-20 10:00:53.266850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.502 [2024-11-20 10:00:53.266864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.502 [2024-11-20 10:00:53.266878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.502 [2024-11-20 10:00:53.266908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.502 qpair failed and we were unable to recover it. 
00:27:16.502 [2024-11-20 10:00:53.276694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.502 [2024-11-20 10:00:53.276787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.502 [2024-11-20 10:00:53.276814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.502 [2024-11-20 10:00:53.276835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.502 [2024-11-20 10:00:53.276849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.502 [2024-11-20 10:00:53.276881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.502 qpair failed and we were unable to recover it. 00:27:16.502 [2024-11-20 10:00:53.286697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.502 [2024-11-20 10:00:53.286784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.502 [2024-11-20 10:00:53.286810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.502 [2024-11-20 10:00:53.286829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.502 [2024-11-20 10:00:53.286843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.502 [2024-11-20 10:00:53.286874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.502 qpair failed and we were unable to recover it. 00:27:16.502 [2024-11-20 10:00:53.296794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.502 [2024-11-20 10:00:53.296882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.502 [2024-11-20 10:00:53.296909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.502 [2024-11-20 10:00:53.296923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.502 [2024-11-20 10:00:53.296937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.503 [2024-11-20 10:00:53.296969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.503 qpair failed and we were unable to recover it. 
00:27:16.503 [2024-11-20 10:00:53.306732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.503 [2024-11-20 10:00:53.306816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.503 [2024-11-20 10:00:53.306842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.503 [2024-11-20 10:00:53.306857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.503 [2024-11-20 10:00:53.306870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.503 [2024-11-20 10:00:53.306901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.503 qpair failed and we were unable to recover it. 00:27:16.503 [2024-11-20 10:00:53.316809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.503 [2024-11-20 10:00:53.316902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.503 [2024-11-20 10:00:53.316928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.503 [2024-11-20 10:00:53.316949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.503 [2024-11-20 10:00:53.316964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.503 [2024-11-20 10:00:53.317002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.503 qpair failed and we were unable to recover it. 00:27:16.503 [2024-11-20 10:00:53.326824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.503 [2024-11-20 10:00:53.326908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.503 [2024-11-20 10:00:53.326934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.503 [2024-11-20 10:00:53.326949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.503 [2024-11-20 10:00:53.326962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.503 [2024-11-20 10:00:53.326992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.503 qpair failed and we were unable to recover it. 
00:27:16.503 [2024-11-20 10:00:53.336825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.503 [2024-11-20 10:00:53.336923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.503 [2024-11-20 10:00:53.336950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.503 [2024-11-20 10:00:53.336964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.503 [2024-11-20 10:00:53.336979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.503 [2024-11-20 10:00:53.337009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.503 qpair failed and we were unable to recover it. 00:27:16.503 [2024-11-20 10:00:53.346874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.503 [2024-11-20 10:00:53.347003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.503 [2024-11-20 10:00:53.347032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.503 [2024-11-20 10:00:53.347050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.503 [2024-11-20 10:00:53.347064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.503 [2024-11-20 10:00:53.347095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.503 qpair failed and we were unable to recover it. 00:27:16.503 [2024-11-20 10:00:53.356944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.503 [2024-11-20 10:00:53.357042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.503 [2024-11-20 10:00:53.357068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.503 [2024-11-20 10:00:53.357083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.503 [2024-11-20 10:00:53.357096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.503 [2024-11-20 10:00:53.357126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.503 qpair failed and we were unable to recover it. 
00:27:16.503 [2024-11-20 10:00:53.366939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.503 [2024-11-20 10:00:53.367060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.503 [2024-11-20 10:00:53.367089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.503 [2024-11-20 10:00:53.367104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.503 [2024-11-20 10:00:53.367118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.503 [2024-11-20 10:00:53.367148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.503 qpair failed and we were unable to recover it. 00:27:16.503 [2024-11-20 10:00:53.376946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.503 [2024-11-20 10:00:53.377027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.503 [2024-11-20 10:00:53.377052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.503 [2024-11-20 10:00:53.377066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.503 [2024-11-20 10:00:53.377079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.503 [2024-11-20 10:00:53.377109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.503 qpair failed and we were unable to recover it. 00:27:16.503 [2024-11-20 10:00:53.386965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.503 [2024-11-20 10:00:53.387052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.503 [2024-11-20 10:00:53.387078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.503 [2024-11-20 10:00:53.387092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.503 [2024-11-20 10:00:53.387105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.503 [2024-11-20 10:00:53.387135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.503 qpair failed and we were unable to recover it. 
00:27:16.503 [2024-11-20 10:00:53.397015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.503 [2024-11-20 10:00:53.397147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.503 [2024-11-20 10:00:53.397173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.503 [2024-11-20 10:00:53.397187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.503 [2024-11-20 10:00:53.397201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.503 [2024-11-20 10:00:53.397230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.503 qpair failed and we were unable to recover it. 00:27:16.503 [2024-11-20 10:00:53.407035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.503 [2024-11-20 10:00:53.407151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.503 [2024-11-20 10:00:53.407178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.503 [2024-11-20 10:00:53.407198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.503 [2024-11-20 10:00:53.407212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.503 [2024-11-20 10:00:53.407242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.503 qpair failed and we were unable to recover it. 00:27:16.764 [2024-11-20 10:00:53.417167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.764 [2024-11-20 10:00:53.417253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.764 [2024-11-20 10:00:53.417278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.764 [2024-11-20 10:00:53.417293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.764 [2024-11-20 10:00:53.417315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.764 [2024-11-20 10:00:53.417348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.764 qpair failed and we were unable to recover it. 
00:27:16.764 [2024-11-20 10:00:53.427092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.764 [2024-11-20 10:00:53.427183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.764 [2024-11-20 10:00:53.427212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.764 [2024-11-20 10:00:53.427229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.764 [2024-11-20 10:00:53.427242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.764 [2024-11-20 10:00:53.427273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.764 qpair failed and we were unable to recover it. 00:27:16.764 [2024-11-20 10:00:53.437134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.764 [2024-11-20 10:00:53.437256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.764 [2024-11-20 10:00:53.437282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.764 [2024-11-20 10:00:53.437297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.764 [2024-11-20 10:00:53.437321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.764 [2024-11-20 10:00:53.437353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.764 qpair failed and we were unable to recover it. 00:27:16.764 [2024-11-20 10:00:53.447175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.764 [2024-11-20 10:00:53.447307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.764 [2024-11-20 10:00:53.447334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.764 [2024-11-20 10:00:53.447348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.764 [2024-11-20 10:00:53.447361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.764 [2024-11-20 10:00:53.447398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.764 qpair failed and we were unable to recover it. 
00:27:16.764 [2024-11-20 10:00:53.457190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.764 [2024-11-20 10:00:53.457281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.764 [2024-11-20 10:00:53.457313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.764 [2024-11-20 10:00:53.457329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.764 [2024-11-20 10:00:53.457342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.764 [2024-11-20 10:00:53.457373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.764 qpair failed and we were unable to recover it. 00:27:16.764 [2024-11-20 10:00:53.467228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.764 [2024-11-20 10:00:53.467340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.764 [2024-11-20 10:00:53.467367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.764 [2024-11-20 10:00:53.467381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.764 [2024-11-20 10:00:53.467394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.764 [2024-11-20 10:00:53.467426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.764 qpair failed and we were unable to recover it. 00:27:16.764 [2024-11-20 10:00:53.477253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.764 [2024-11-20 10:00:53.477353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.764 [2024-11-20 10:00:53.477380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.764 [2024-11-20 10:00:53.477394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.764 [2024-11-20 10:00:53.477407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.764 [2024-11-20 10:00:53.477438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.764 qpair failed and we were unable to recover it. 
00:27:16.764 [2024-11-20 10:00:53.487283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.764 [2024-11-20 10:00:53.487379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.764 [2024-11-20 10:00:53.487405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.764 [2024-11-20 10:00:53.487420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.764 [2024-11-20 10:00:53.487433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.764 [2024-11-20 10:00:53.487463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.764 qpair failed and we were unable to recover it. 00:27:16.764 [2024-11-20 10:00:53.497290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.764 [2024-11-20 10:00:53.497411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.764 [2024-11-20 10:00:53.497437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.765 [2024-11-20 10:00:53.497452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.765 [2024-11-20 10:00:53.497465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.765 [2024-11-20 10:00:53.497495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.765 qpair failed and we were unable to recover it. 00:27:16.765 [2024-11-20 10:00:53.507417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.765 [2024-11-20 10:00:53.507549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.765 [2024-11-20 10:00:53.507573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.765 [2024-11-20 10:00:53.507587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.765 [2024-11-20 10:00:53.507601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.765 [2024-11-20 10:00:53.507633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.765 qpair failed and we were unable to recover it. 
00:27:16.765 [2024-11-20 10:00:53.517382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.765 [2024-11-20 10:00:53.517494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.765 [2024-11-20 10:00:53.517519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.765 [2024-11-20 10:00:53.517533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.765 [2024-11-20 10:00:53.517547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.765 [2024-11-20 10:00:53.517577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.765 qpair failed and we were unable to recover it. 00:27:16.765 [2024-11-20 10:00:53.527390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.765 [2024-11-20 10:00:53.527480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.765 [2024-11-20 10:00:53.527505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.765 [2024-11-20 10:00:53.527520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.765 [2024-11-20 10:00:53.527532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.765 [2024-11-20 10:00:53.527563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.765 qpair failed and we were unable to recover it. 00:27:16.765 [2024-11-20 10:00:53.537379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.765 [2024-11-20 10:00:53.537457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.765 [2024-11-20 10:00:53.537487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.765 [2024-11-20 10:00:53.537503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.765 [2024-11-20 10:00:53.537517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.765 [2024-11-20 10:00:53.537546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.765 qpair failed and we were unable to recover it. 
00:27:16.765 [2024-11-20 10:00:53.547465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.765 [2024-11-20 10:00:53.547579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.765 [2024-11-20 10:00:53.547606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.765 [2024-11-20 10:00:53.547620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.765 [2024-11-20 10:00:53.547634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.765 [2024-11-20 10:00:53.547663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.765 qpair failed and we were unable to recover it. 00:27:16.765 [2024-11-20 10:00:53.557493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.765 [2024-11-20 10:00:53.557577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.765 [2024-11-20 10:00:53.557602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.765 [2024-11-20 10:00:53.557617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.765 [2024-11-20 10:00:53.557630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.765 [2024-11-20 10:00:53.557660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.765 qpair failed and we were unable to recover it. 00:27:16.765 [2024-11-20 10:00:53.567508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.765 [2024-11-20 10:00:53.567597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.765 [2024-11-20 10:00:53.567623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.765 [2024-11-20 10:00:53.567637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.765 [2024-11-20 10:00:53.567651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.765 [2024-11-20 10:00:53.567682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.765 qpair failed and we were unable to recover it. 
00:27:16.765 [2024-11-20 10:00:53.577599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.765 [2024-11-20 10:00:53.577688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.765 [2024-11-20 10:00:53.577716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.765 [2024-11-20 10:00:53.577732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.765 [2024-11-20 10:00:53.577750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.765 [2024-11-20 10:00:53.577783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.765 qpair failed and we were unable to recover it. 00:27:16.765 [2024-11-20 10:00:53.587563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.765 [2024-11-20 10:00:53.587646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.765 [2024-11-20 10:00:53.587672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.765 [2024-11-20 10:00:53.587686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.765 [2024-11-20 10:00:53.587699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.765 [2024-11-20 10:00:53.587730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.765 qpair failed and we were unable to recover it. 00:27:16.765 [2024-11-20 10:00:53.597594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.765 [2024-11-20 10:00:53.597681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.765 [2024-11-20 10:00:53.597708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.765 [2024-11-20 10:00:53.597722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.765 [2024-11-20 10:00:53.597735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.765 [2024-11-20 10:00:53.597767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.765 qpair failed and we were unable to recover it. 
00:27:16.765 [2024-11-20 10:00:53.607644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.765 [2024-11-20 10:00:53.607768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.765 [2024-11-20 10:00:53.607794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.765 [2024-11-20 10:00:53.607808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.765 [2024-11-20 10:00:53.607822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.765 [2024-11-20 10:00:53.607852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.765 qpair failed and we were unable to recover it. 00:27:16.765 [2024-11-20 10:00:53.617711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.765 [2024-11-20 10:00:53.617794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.765 [2024-11-20 10:00:53.617820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.765 [2024-11-20 10:00:53.617835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.765 [2024-11-20 10:00:53.617848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.765 [2024-11-20 10:00:53.617878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.765 qpair failed and we were unable to recover it. 00:27:16.765 [2024-11-20 10:00:53.627639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.765 [2024-11-20 10:00:53.627733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.766 [2024-11-20 10:00:53.627766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.766 [2024-11-20 10:00:53.627786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.766 [2024-11-20 10:00:53.627800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.766 [2024-11-20 10:00:53.627831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.766 qpair failed and we were unable to recover it. 
00:27:16.766 [2024-11-20 10:00:53.637717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.766 [2024-11-20 10:00:53.637833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.766 [2024-11-20 10:00:53.637859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.766 [2024-11-20 10:00:53.637873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.766 [2024-11-20 10:00:53.637886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.766 [2024-11-20 10:00:53.637919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.766 qpair failed and we were unable to recover it. 00:27:16.766 [2024-11-20 10:00:53.647703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.766 [2024-11-20 10:00:53.647796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.766 [2024-11-20 10:00:53.647821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.766 [2024-11-20 10:00:53.647836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.766 [2024-11-20 10:00:53.647849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.766 [2024-11-20 10:00:53.647879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.766 qpair failed and we were unable to recover it. 00:27:16.766 [2024-11-20 10:00:53.657762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.766 [2024-11-20 10:00:53.657888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.766 [2024-11-20 10:00:53.657914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.766 [2024-11-20 10:00:53.657928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.766 [2024-11-20 10:00:53.657942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.766 [2024-11-20 10:00:53.657973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.766 qpair failed and we were unable to recover it. 
00:27:16.766 [2024-11-20 10:00:53.667803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.766 [2024-11-20 10:00:53.667886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.766 [2024-11-20 10:00:53.667920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.766 [2024-11-20 10:00:53.667936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.766 [2024-11-20 10:00:53.667950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:16.766 [2024-11-20 10:00:53.667980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.766 qpair failed and we were unable to recover it. 00:27:17.024 [2024-11-20 10:00:53.677885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.024 [2024-11-20 10:00:53.677982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.024 [2024-11-20 10:00:53.678009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.024 [2024-11-20 10:00:53.678023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.024 [2024-11-20 10:00:53.678036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.024 [2024-11-20 10:00:53.678068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.024 qpair failed and we were unable to recover it. 00:27:17.024 [2024-11-20 10:00:53.687862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.024 [2024-11-20 10:00:53.687993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.024 [2024-11-20 10:00:53.688020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.024 [2024-11-20 10:00:53.688034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.024 [2024-11-20 10:00:53.688047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.024 [2024-11-20 10:00:53.688077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.024 qpair failed and we were unable to recover it. 
00:27:17.024 [2024-11-20 10:00:53.697863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.024 [2024-11-20 10:00:53.697945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.024 [2024-11-20 10:00:53.697972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.024 [2024-11-20 10:00:53.697986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.024 [2024-11-20 10:00:53.697999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.024 [2024-11-20 10:00:53.698029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.024 qpair failed and we were unable to recover it. 00:27:17.024 [2024-11-20 10:00:53.707921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.024 [2024-11-20 10:00:53.708044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.024 [2024-11-20 10:00:53.708073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.024 [2024-11-20 10:00:53.708088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.024 [2024-11-20 10:00:53.708108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.024 [2024-11-20 10:00:53.708140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.024 qpair failed and we were unable to recover it. 00:27:17.024 [2024-11-20 10:00:53.717983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.025 [2024-11-20 10:00:53.718125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.025 [2024-11-20 10:00:53.718150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.025 [2024-11-20 10:00:53.718164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.025 [2024-11-20 10:00:53.718178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.025 [2024-11-20 10:00:53.718208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.025 qpair failed and we were unable to recover it. 
00:27:17.025 [2024-11-20 10:00:53.727974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.025 [2024-11-20 10:00:53.728069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.025 [2024-11-20 10:00:53.728099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.025 [2024-11-20 10:00:53.728115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.025 [2024-11-20 10:00:53.728128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.025 [2024-11-20 10:00:53.728173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.025 qpair failed and we were unable to recover it. 00:27:17.025 [2024-11-20 10:00:53.738018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.025 [2024-11-20 10:00:53.738107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.025 [2024-11-20 10:00:53.738133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.025 [2024-11-20 10:00:53.738148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.025 [2024-11-20 10:00:53.738161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.025 [2024-11-20 10:00:53.738191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.025 qpair failed and we were unable to recover it. 00:27:17.025 [2024-11-20 10:00:53.747997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.025 [2024-11-20 10:00:53.748102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.025 [2024-11-20 10:00:53.748128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.025 [2024-11-20 10:00:53.748142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.025 [2024-11-20 10:00:53.748155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.025 [2024-11-20 10:00:53.748186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.025 qpair failed and we were unable to recover it. 
00:27:17.025 [2024-11-20 10:00:53.758034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.025 [2024-11-20 10:00:53.758157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.025 [2024-11-20 10:00:53.758186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.025 [2024-11-20 10:00:53.758201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.025 [2024-11-20 10:00:53.758214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.025 [2024-11-20 10:00:53.758244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.025 qpair failed and we were unable to recover it. 00:27:17.025 [2024-11-20 10:00:53.768148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.025 [2024-11-20 10:00:53.768237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.025 [2024-11-20 10:00:53.768263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.025 [2024-11-20 10:00:53.768277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.025 [2024-11-20 10:00:53.768290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.025 [2024-11-20 10:00:53.768328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.025 qpair failed and we were unable to recover it. 00:27:17.025 [2024-11-20 10:00:53.778082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.025 [2024-11-20 10:00:53.778185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.025 [2024-11-20 10:00:53.778211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.025 [2024-11-20 10:00:53.778225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.025 [2024-11-20 10:00:53.778239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.025 [2024-11-20 10:00:53.778269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.025 qpair failed and we were unable to recover it. 
00:27:17.025 [2024-11-20 10:00:53.788156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.025 [2024-11-20 10:00:53.788239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.025 [2024-11-20 10:00:53.788264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.025 [2024-11-20 10:00:53.788278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.025 [2024-11-20 10:00:53.788291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.025 [2024-11-20 10:00:53.788329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.025 qpair failed and we were unable to recover it. 00:27:17.025 [2024-11-20 10:00:53.798145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.025 [2024-11-20 10:00:53.798235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.025 [2024-11-20 10:00:53.798266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.025 [2024-11-20 10:00:53.798281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.025 [2024-11-20 10:00:53.798294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.025 [2024-11-20 10:00:53.798333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.025 qpair failed and we were unable to recover it. 00:27:17.025 [2024-11-20 10:00:53.808184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.025 [2024-11-20 10:00:53.808316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.025 [2024-11-20 10:00:53.808349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.025 [2024-11-20 10:00:53.808363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.025 [2024-11-20 10:00:53.808376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.025 [2024-11-20 10:00:53.808407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.025 qpair failed and we were unable to recover it. 
00:27:17.025 [2024-11-20 10:00:53.818167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.025 [2024-11-20 10:00:53.818249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.025 [2024-11-20 10:00:53.818275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.025 [2024-11-20 10:00:53.818290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.025 [2024-11-20 10:00:53.818309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.025 [2024-11-20 10:00:53.818341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.025 qpair failed and we were unable to recover it. 00:27:17.025 [2024-11-20 10:00:53.828219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.025 [2024-11-20 10:00:53.828310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.025 [2024-11-20 10:00:53.828337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.025 [2024-11-20 10:00:53.828351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.025 [2024-11-20 10:00:53.828364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.025 [2024-11-20 10:00:53.828408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.025 qpair failed and we were unable to recover it. 00:27:17.025 [2024-11-20 10:00:53.838242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.025 [2024-11-20 10:00:53.838344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.025 [2024-11-20 10:00:53.838373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.025 [2024-11-20 10:00:53.838395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.025 [2024-11-20 10:00:53.838410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.025 [2024-11-20 10:00:53.838442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.025 qpair failed and we were unable to recover it. 
00:27:17.025 [2024-11-20 10:00:53.848281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.026 [2024-11-20 10:00:53.848402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.026 [2024-11-20 10:00:53.848429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.026 [2024-11-20 10:00:53.848444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.026 [2024-11-20 10:00:53.848457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.026 [2024-11-20 10:00:53.848489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.026 qpair failed and we were unable to recover it. 00:27:17.026 [2024-11-20 10:00:53.858321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.026 [2024-11-20 10:00:53.858407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.026 [2024-11-20 10:00:53.858436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.026 [2024-11-20 10:00:53.858452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.026 [2024-11-20 10:00:53.858465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.026 [2024-11-20 10:00:53.858496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.026 qpair failed and we were unable to recover it. 00:27:17.026 [2024-11-20 10:00:53.868315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.026 [2024-11-20 10:00:53.868408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.026 [2024-11-20 10:00:53.868435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.026 [2024-11-20 10:00:53.868449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.026 [2024-11-20 10:00:53.868462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.026 [2024-11-20 10:00:53.868492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.026 qpair failed and we were unable to recover it. 
00:27:17.026 [2024-11-20 10:00:53.878369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.026 [2024-11-20 10:00:53.878473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.026 [2024-11-20 10:00:53.878499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.026 [2024-11-20 10:00:53.878514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.026 [2024-11-20 10:00:53.878526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.026 [2024-11-20 10:00:53.878562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.026 qpair failed and we were unable to recover it. 00:27:17.026 [2024-11-20 10:00:53.888400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.026 [2024-11-20 10:00:53.888530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.026 [2024-11-20 10:00:53.888557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.026 [2024-11-20 10:00:53.888571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.026 [2024-11-20 10:00:53.888585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.026 [2024-11-20 10:00:53.888615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.026 qpair failed and we were unable to recover it. 00:27:17.026 [2024-11-20 10:00:53.898408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.026 [2024-11-20 10:00:53.898499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.026 [2024-11-20 10:00:53.898528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.026 [2024-11-20 10:00:53.898546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.026 [2024-11-20 10:00:53.898560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.026 [2024-11-20 10:00:53.898592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.026 qpair failed and we were unable to recover it. 
00:27:17.026 [2024-11-20 10:00:53.908504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.026 [2024-11-20 10:00:53.908616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.026 [2024-11-20 10:00:53.908642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.026 [2024-11-20 10:00:53.908656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.026 [2024-11-20 10:00:53.908670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.026 [2024-11-20 10:00:53.908701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.026 qpair failed and we were unable to recover it. 00:27:17.026 [2024-11-20 10:00:53.918508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.026 [2024-11-20 10:00:53.918615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.026 [2024-11-20 10:00:53.918641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.026 [2024-11-20 10:00:53.918655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.026 [2024-11-20 10:00:53.918668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.026 [2024-11-20 10:00:53.918711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.026 qpair failed and we were unable to recover it. 00:27:17.026 [2024-11-20 10:00:53.928515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.026 [2024-11-20 10:00:53.928602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.026 [2024-11-20 10:00:53.928626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.026 [2024-11-20 10:00:53.928640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.026 [2024-11-20 10:00:53.928652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.026 [2024-11-20 10:00:53.928682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.026 qpair failed and we were unable to recover it. 
00:27:17.285 [2024-11-20 10:00:53.938550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.285 [2024-11-20 10:00:53.938646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.285 [2024-11-20 10:00:53.938673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.285 [2024-11-20 10:00:53.938689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.285 [2024-11-20 10:00:53.938702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.285 [2024-11-20 10:00:53.938735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.285 qpair failed and we were unable to recover it. 00:27:17.285 [2024-11-20 10:00:53.948526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.285 [2024-11-20 10:00:53.948629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.285 [2024-11-20 10:00:53.948655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.285 [2024-11-20 10:00:53.948669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.285 [2024-11-20 10:00:53.948682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.285 [2024-11-20 10:00:53.948713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.285 qpair failed and we were unable to recover it. 00:27:17.285 [2024-11-20 10:00:53.958654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.285 [2024-11-20 10:00:53.958776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.285 [2024-11-20 10:00:53.958801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.285 [2024-11-20 10:00:53.958816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.285 [2024-11-20 10:00:53.958828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.285 [2024-11-20 10:00:53.958858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.285 qpair failed and we were unable to recover it. 
00:27:17.285 [2024-11-20 10:00:53.968641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.285 [2024-11-20 10:00:53.968719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.285 [2024-11-20 10:00:53.968744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.285 [2024-11-20 10:00:53.968765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.285 [2024-11-20 10:00:53.968779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.285 [2024-11-20 10:00:53.968810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.285 qpair failed and we were unable to recover it. 00:27:17.285 [2024-11-20 10:00:53.978641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.285 [2024-11-20 10:00:53.978728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.285 [2024-11-20 10:00:53.978758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.285 [2024-11-20 10:00:53.978773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.285 [2024-11-20 10:00:53.978786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.285 [2024-11-20 10:00:53.978816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.285 qpair failed and we were unable to recover it. 00:27:17.285 [2024-11-20 10:00:53.988643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.285 [2024-11-20 10:00:53.988722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.285 [2024-11-20 10:00:53.988748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.285 [2024-11-20 10:00:53.988762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.285 [2024-11-20 10:00:53.988776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.285 [2024-11-20 10:00:53.988807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.285 qpair failed and we were unable to recover it. 
00:27:17.285 [2024-11-20 10:00:53.998685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.285 [2024-11-20 10:00:53.998777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.285 [2024-11-20 10:00:53.998802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.285 [2024-11-20 10:00:53.998816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.285 [2024-11-20 10:00:53.998831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.285 [2024-11-20 10:00:53.998863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.285 qpair failed and we were unable to recover it. 00:27:17.285 [2024-11-20 10:00:54.008755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.285 [2024-11-20 10:00:54.008839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.285 [2024-11-20 10:00:54.008868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.285 [2024-11-20 10:00:54.008884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.285 [2024-11-20 10:00:54.008897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.285 [2024-11-20 10:00:54.008934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.285 qpair failed and we were unable to recover it. 00:27:17.285 [2024-11-20 10:00:54.018732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.285 [2024-11-20 10:00:54.018852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.285 [2024-11-20 10:00:54.018878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.285 [2024-11-20 10:00:54.018893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.285 [2024-11-20 10:00:54.018906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.285 [2024-11-20 10:00:54.018949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.285 qpair failed and we were unable to recover it. 
00:27:17.285 [2024-11-20 10:00:54.028809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.285 [2024-11-20 10:00:54.028900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.285 [2024-11-20 10:00:54.028926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.285 [2024-11-20 10:00:54.028941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.285 [2024-11-20 10:00:54.028954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.285 [2024-11-20 10:00:54.028985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.285 qpair failed and we were unable to recover it. 00:27:17.285 [2024-11-20 10:00:54.038819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.285 [2024-11-20 10:00:54.038910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.285 [2024-11-20 10:00:54.038936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.285 [2024-11-20 10:00:54.038951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.285 [2024-11-20 10:00:54.038964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.286 [2024-11-20 10:00:54.038995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.286 qpair failed and we were unable to recover it. 00:27:17.286 [2024-11-20 10:00:54.048812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.286 [2024-11-20 10:00:54.048896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.286 [2024-11-20 10:00:54.048922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.286 [2024-11-20 10:00:54.048936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.286 [2024-11-20 10:00:54.048950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:17.286 [2024-11-20 10:00:54.048982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.286 qpair failed and we were unable to recover it. 
00:27:17.286 [2024-11-20 10:00:54.058848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.286 [2024-11-20 10:00:54.058960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.286 [2024-11-20 10:00:54.058991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.286 [2024-11-20 10:00:54.059007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.286 [2024-11-20 10:00:54.059021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.286 [2024-11-20 10:00:54.059054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.286 qpair failed and we were unable to recover it. 00:27:17.286 [2024-11-20 10:00:54.068882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.286 [2024-11-20 10:00:54.068965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.286 [2024-11-20 10:00:54.068992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.286 [2024-11-20 10:00:54.069007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.286 [2024-11-20 10:00:54.069020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.286 [2024-11-20 10:00:54.069051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.286 qpair failed and we were unable to recover it. 00:27:17.286 [2024-11-20 10:00:54.078935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.286 [2024-11-20 10:00:54.079024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.286 [2024-11-20 10:00:54.079050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.286 [2024-11-20 10:00:54.079064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.286 [2024-11-20 10:00:54.079077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.286 [2024-11-20 10:00:54.079107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.286 qpair failed and we were unable to recover it. 
00:27:17.286 [2024-11-20 10:00:54.088929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.286 [2024-11-20 10:00:54.089017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.286 [2024-11-20 10:00:54.089044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.286 [2024-11-20 10:00:54.089059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.286 [2024-11-20 10:00:54.089072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.286 [2024-11-20 10:00:54.089101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.286 qpair failed and we were unable to recover it. 00:27:17.286 [2024-11-20 10:00:54.098979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.286 [2024-11-20 10:00:54.099063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.286 [2024-11-20 10:00:54.099095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.286 [2024-11-20 10:00:54.099111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.286 [2024-11-20 10:00:54.099124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.286 [2024-11-20 10:00:54.099153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.286 qpair failed and we were unable to recover it. 00:27:17.286 [2024-11-20 10:00:54.109009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.286 [2024-11-20 10:00:54.109127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.286 [2024-11-20 10:00:54.109153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.286 [2024-11-20 10:00:54.109168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.286 [2024-11-20 10:00:54.109181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.286 [2024-11-20 10:00:54.109211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.286 qpair failed and we were unable to recover it. 
00:27:17.286 [2024-11-20 10:00:54.119042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.286 [2024-11-20 10:00:54.119133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.286 [2024-11-20 10:00:54.119159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.286 [2024-11-20 10:00:54.119174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.286 [2024-11-20 10:00:54.119187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.286 [2024-11-20 10:00:54.119216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.286 qpair failed and we were unable to recover it. 00:27:17.286 [2024-11-20 10:00:54.129069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.286 [2024-11-20 10:00:54.129155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.286 [2024-11-20 10:00:54.129180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.286 [2024-11-20 10:00:54.129195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.286 [2024-11-20 10:00:54.129208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.286 [2024-11-20 10:00:54.129241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.286 qpair failed and we were unable to recover it. 00:27:17.286 [2024-11-20 10:00:54.139092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.286 [2024-11-20 10:00:54.139215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.286 [2024-11-20 10:00:54.139240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.286 [2024-11-20 10:00:54.139255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.286 [2024-11-20 10:00:54.139268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.286 [2024-11-20 10:00:54.139311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.286 qpair failed and we were unable to recover it. 
00:27:17.286 [2024-11-20 10:00:54.149191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.286 [2024-11-20 10:00:54.149332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.286 [2024-11-20 10:00:54.149359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.286 [2024-11-20 10:00:54.149373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.286 [2024-11-20 10:00:54.149387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.286 [2024-11-20 10:00:54.149417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.286 qpair failed and we were unable to recover it. 00:27:17.286 [2024-11-20 10:00:54.159128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.286 [2024-11-20 10:00:54.159224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.286 [2024-11-20 10:00:54.159253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.286 [2024-11-20 10:00:54.159270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.286 [2024-11-20 10:00:54.159282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.287 [2024-11-20 10:00:54.159322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.287 qpair failed and we were unable to recover it. 00:27:17.287 [2024-11-20 10:00:54.169152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.287 [2024-11-20 10:00:54.169240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.287 [2024-11-20 10:00:54.169267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.287 [2024-11-20 10:00:54.169281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.287 [2024-11-20 10:00:54.169294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.287 [2024-11-20 10:00:54.169329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.287 qpair failed and we were unable to recover it. 
00:27:17.287 [2024-11-20 10:00:54.179213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.287 [2024-11-20 10:00:54.179338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.287 [2024-11-20 10:00:54.179365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.287 [2024-11-20 10:00:54.179379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.287 [2024-11-20 10:00:54.179392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.287 [2024-11-20 10:00:54.179421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.287 qpair failed and we were unable to recover it. 00:27:17.287 [2024-11-20 10:00:54.189209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.287 [2024-11-20 10:00:54.189294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.287 [2024-11-20 10:00:54.189327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.287 [2024-11-20 10:00:54.189342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.287 [2024-11-20 10:00:54.189356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.287 [2024-11-20 10:00:54.189385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.287 qpair failed and we were unable to recover it. 00:27:17.546 [2024-11-20 10:00:54.199271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.546 [2024-11-20 10:00:54.199370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.546 [2024-11-20 10:00:54.199396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.546 [2024-11-20 10:00:54.199410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.546 [2024-11-20 10:00:54.199423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.546 [2024-11-20 10:00:54.199453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.546 qpair failed and we were unable to recover it. 
00:27:17.546 [2024-11-20 10:00:54.209363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.546 [2024-11-20 10:00:54.209463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.546 [2024-11-20 10:00:54.209488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.546 [2024-11-20 10:00:54.209502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.546 [2024-11-20 10:00:54.209515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.546 [2024-11-20 10:00:54.209546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.546 qpair failed and we were unable to recover it. 00:27:17.546 [2024-11-20 10:00:54.219298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.546 [2024-11-20 10:00:54.219409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.546 [2024-11-20 10:00:54.219435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.546 [2024-11-20 10:00:54.219450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.546 [2024-11-20 10:00:54.219463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.546 [2024-11-20 10:00:54.219493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.546 qpair failed and we were unable to recover it. 00:27:17.546 [2024-11-20 10:00:54.229355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.546 [2024-11-20 10:00:54.229460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.546 [2024-11-20 10:00:54.229491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.546 [2024-11-20 10:00:54.229506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.546 [2024-11-20 10:00:54.229519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.546 [2024-11-20 10:00:54.229548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.546 qpair failed and we were unable to recover it. 
00:27:17.546 [2024-11-20 10:00:54.239357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.546 [2024-11-20 10:00:54.239447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.546 [2024-11-20 10:00:54.239473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.546 [2024-11-20 10:00:54.239487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.546 [2024-11-20 10:00:54.239500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.546 [2024-11-20 10:00:54.239529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.546 qpair failed and we were unable to recover it. 00:27:17.546 [2024-11-20 10:00:54.249374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.547 [2024-11-20 10:00:54.249462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.547 [2024-11-20 10:00:54.249487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.547 [2024-11-20 10:00:54.249502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.547 [2024-11-20 10:00:54.249515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.547 [2024-11-20 10:00:54.249544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.547 qpair failed and we were unable to recover it. 00:27:17.547 [2024-11-20 10:00:54.259505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.547 [2024-11-20 10:00:54.259589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.547 [2024-11-20 10:00:54.259614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.547 [2024-11-20 10:00:54.259628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.547 [2024-11-20 10:00:54.259641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.547 [2024-11-20 10:00:54.259671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.547 qpair failed and we were unable to recover it. 
00:27:17.547 [2024-11-20 10:00:54.269422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.547 [2024-11-20 10:00:54.269553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.547 [2024-11-20 10:00:54.269579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.547 [2024-11-20 10:00:54.269593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.547 [2024-11-20 10:00:54.269612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.547 [2024-11-20 10:00:54.269642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.547 qpair failed and we were unable to recover it. 00:27:17.547 [2024-11-20 10:00:54.279513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.547 [2024-11-20 10:00:54.279628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.547 [2024-11-20 10:00:54.279653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.547 [2024-11-20 10:00:54.279668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.547 [2024-11-20 10:00:54.279681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.547 [2024-11-20 10:00:54.279710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.547 qpair failed and we were unable to recover it. 00:27:17.547 [2024-11-20 10:00:54.289501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.547 [2024-11-20 10:00:54.289587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.547 [2024-11-20 10:00:54.289613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.547 [2024-11-20 10:00:54.289627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.547 [2024-11-20 10:00:54.289640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.547 [2024-11-20 10:00:54.289669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.547 qpair failed and we were unable to recover it. 
00:27:17.547 [2024-11-20 10:00:54.299514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.547 [2024-11-20 10:00:54.299598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.547 [2024-11-20 10:00:54.299625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.547 [2024-11-20 10:00:54.299639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.547 [2024-11-20 10:00:54.299652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.547 [2024-11-20 10:00:54.299681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.547 qpair failed and we were unable to recover it. 00:27:17.547 [2024-11-20 10:00:54.309541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.547 [2024-11-20 10:00:54.309646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.547 [2024-11-20 10:00:54.309672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.547 [2024-11-20 10:00:54.309686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.547 [2024-11-20 10:00:54.309699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.547 [2024-11-20 10:00:54.309730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.547 qpair failed and we were unable to recover it. 00:27:17.547 [2024-11-20 10:00:54.319598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.547 [2024-11-20 10:00:54.319695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.547 [2024-11-20 10:00:54.319724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.547 [2024-11-20 10:00:54.319740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.547 [2024-11-20 10:00:54.319754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.547 [2024-11-20 10:00:54.319784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.547 qpair failed and we were unable to recover it. 
00:27:17.547 [2024-11-20 10:00:54.329629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.547 [2024-11-20 10:00:54.329737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.547 [2024-11-20 10:00:54.329763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.547 [2024-11-20 10:00:54.329778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.547 [2024-11-20 10:00:54.329791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.547 [2024-11-20 10:00:54.329820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.547 qpair failed and we were unable to recover it. 00:27:17.547 [2024-11-20 10:00:54.339629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.547 [2024-11-20 10:00:54.339705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.547 [2024-11-20 10:00:54.339731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.547 [2024-11-20 10:00:54.339745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.547 [2024-11-20 10:00:54.339758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.547 [2024-11-20 10:00:54.339787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.547 qpair failed and we were unable to recover it. 00:27:17.547 [2024-11-20 10:00:54.349646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.547 [2024-11-20 10:00:54.349730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.547 [2024-11-20 10:00:54.349756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.547 [2024-11-20 10:00:54.349771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.547 [2024-11-20 10:00:54.349784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.547 [2024-11-20 10:00:54.349813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.547 qpair failed and we were unable to recover it. 
00:27:17.547 [2024-11-20 10:00:54.359706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.547 [2024-11-20 10:00:54.359796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.547 [2024-11-20 10:00:54.359827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.547 [2024-11-20 10:00:54.359842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.547 [2024-11-20 10:00:54.359855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.547 [2024-11-20 10:00:54.359884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.547 qpair failed and we were unable to recover it. 00:27:17.547 [2024-11-20 10:00:54.369717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.547 [2024-11-20 10:00:54.369808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.547 [2024-11-20 10:00:54.369834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.547 [2024-11-20 10:00:54.369849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.547 [2024-11-20 10:00:54.369862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.547 [2024-11-20 10:00:54.369892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.547 qpair failed and we were unable to recover it. 00:27:17.547 [2024-11-20 10:00:54.379732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.548 [2024-11-20 10:00:54.379859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.548 [2024-11-20 10:00:54.379885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.548 [2024-11-20 10:00:54.379899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.548 [2024-11-20 10:00:54.379913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.548 [2024-11-20 10:00:54.379942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.548 qpair failed and we were unable to recover it. 
00:27:17.548 [2024-11-20 10:00:54.389762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.548 [2024-11-20 10:00:54.389844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.548 [2024-11-20 10:00:54.389869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.548 [2024-11-20 10:00:54.389884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.548 [2024-11-20 10:00:54.389897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.548 [2024-11-20 10:00:54.389927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.548 qpair failed and we were unable to recover it. 00:27:17.548 [2024-11-20 10:00:54.399836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.548 [2024-11-20 10:00:54.399925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.548 [2024-11-20 10:00:54.399951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.548 [2024-11-20 10:00:54.399965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.548 [2024-11-20 10:00:54.399983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.548 [2024-11-20 10:00:54.400013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.548 qpair failed and we were unable to recover it. 00:27:17.548 [2024-11-20 10:00:54.409821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.548 [2024-11-20 10:00:54.409905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.548 [2024-11-20 10:00:54.409931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.548 [2024-11-20 10:00:54.409945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.548 [2024-11-20 10:00:54.409958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.548 [2024-11-20 10:00:54.409989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.548 qpair failed and we were unable to recover it. 
00:27:17.548 [2024-11-20 10:00:54.419938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.548 [2024-11-20 10:00:54.420026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.548 [2024-11-20 10:00:54.420052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.548 [2024-11-20 10:00:54.420066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.548 [2024-11-20 10:00:54.420078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.548 [2024-11-20 10:00:54.420108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.548 qpair failed and we were unable to recover it. 00:27:17.548 [2024-11-20 10:00:54.429868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.548 [2024-11-20 10:00:54.429990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.548 [2024-11-20 10:00:54.430015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.548 [2024-11-20 10:00:54.430029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.548 [2024-11-20 10:00:54.430042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.548 [2024-11-20 10:00:54.430073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.548 qpair failed and we were unable to recover it. 00:27:17.548 [2024-11-20 10:00:54.439944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.548 [2024-11-20 10:00:54.440037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.548 [2024-11-20 10:00:54.440063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.548 [2024-11-20 10:00:54.440077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.548 [2024-11-20 10:00:54.440090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.548 [2024-11-20 10:00:54.440118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.548 qpair failed and we were unable to recover it. 
00:27:17.548 [2024-11-20 10:00:54.449924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.548 [2024-11-20 10:00:54.450020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.548 [2024-11-20 10:00:54.450045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.548 [2024-11-20 10:00:54.450058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.548 [2024-11-20 10:00:54.450071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.548 [2024-11-20 10:00:54.450101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.548 qpair failed and we were unable to recover it. 00:27:17.808 [2024-11-20 10:00:54.459950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.808 [2024-11-20 10:00:54.460068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.808 [2024-11-20 10:00:54.460093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.808 [2024-11-20 10:00:54.460108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.808 [2024-11-20 10:00:54.460121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.808 [2024-11-20 10:00:54.460150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.808 qpair failed and we were unable to recover it. 00:27:17.808 [2024-11-20 10:00:54.470002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.808 [2024-11-20 10:00:54.470117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.808 [2024-11-20 10:00:54.470142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.808 [2024-11-20 10:00:54.470157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.808 [2024-11-20 10:00:54.470170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.808 [2024-11-20 10:00:54.470199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.808 qpair failed and we were unable to recover it. 
00:27:17.808 [2024-11-20 10:00:54.480037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.808 [2024-11-20 10:00:54.480133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.808 [2024-11-20 10:00:54.480159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.808 [2024-11-20 10:00:54.480173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.808 [2024-11-20 10:00:54.480186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.808 [2024-11-20 10:00:54.480215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.808 qpair failed and we were unable to recover it. 00:27:17.808 [2024-11-20 10:00:54.490077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.808 [2024-11-20 10:00:54.490162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.808 [2024-11-20 10:00:54.490198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.808 [2024-11-20 10:00:54.490216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.808 [2024-11-20 10:00:54.490229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.808 [2024-11-20 10:00:54.490259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.808 qpair failed and we were unable to recover it. 00:27:17.808 [2024-11-20 10:00:54.500087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.808 [2024-11-20 10:00:54.500170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.808 [2024-11-20 10:00:54.500196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.808 [2024-11-20 10:00:54.500210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.808 [2024-11-20 10:00:54.500224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.808 [2024-11-20 10:00:54.500252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.808 qpair failed and we were unable to recover it. 
00:27:17.808 [2024-11-20 10:00:54.510129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.808 [2024-11-20 10:00:54.510262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.808 [2024-11-20 10:00:54.510288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.808 [2024-11-20 10:00:54.510309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.808 [2024-11-20 10:00:54.510324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.809 [2024-11-20 10:00:54.510354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.809 qpair failed and we were unable to recover it. 00:27:17.809 [2024-11-20 10:00:54.520128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.809 [2024-11-20 10:00:54.520215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.809 [2024-11-20 10:00:54.520240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.809 [2024-11-20 10:00:54.520254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.809 [2024-11-20 10:00:54.520267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.809 [2024-11-20 10:00:54.520296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.809 qpair failed and we were unable to recover it. 00:27:17.809 [2024-11-20 10:00:54.530164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.809 [2024-11-20 10:00:54.530251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.809 [2024-11-20 10:00:54.530276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.809 [2024-11-20 10:00:54.530290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.809 [2024-11-20 10:00:54.530317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.809 [2024-11-20 10:00:54.530348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.809 qpair failed and we were unable to recover it. 
00:27:17.809 [2024-11-20 10:00:54.540173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.809 [2024-11-20 10:00:54.540267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.809 [2024-11-20 10:00:54.540292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.809 [2024-11-20 10:00:54.540313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.809 [2024-11-20 10:00:54.540328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.809 [2024-11-20 10:00:54.540357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.809 qpair failed and we were unable to recover it. 00:27:17.809 [2024-11-20 10:00:54.550260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.809 [2024-11-20 10:00:54.550367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.809 [2024-11-20 10:00:54.550392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.809 [2024-11-20 10:00:54.550407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.809 [2024-11-20 10:00:54.550420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.809 [2024-11-20 10:00:54.550451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.809 qpair failed and we were unable to recover it. 00:27:17.809 [2024-11-20 10:00:54.560293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.809 [2024-11-20 10:00:54.560391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.809 [2024-11-20 10:00:54.560416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.809 [2024-11-20 10:00:54.560430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.809 [2024-11-20 10:00:54.560442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.809 [2024-11-20 10:00:54.560471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.809 qpair failed and we were unable to recover it. 
00:27:17.809 [2024-11-20 10:00:54.570359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.809 [2024-11-20 10:00:54.570444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.809 [2024-11-20 10:00:54.570469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.809 [2024-11-20 10:00:54.570483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.809 [2024-11-20 10:00:54.570497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.809 [2024-11-20 10:00:54.570526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.809 qpair failed and we were unable to recover it. 00:27:17.809 [2024-11-20 10:00:54.580290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.809 [2024-11-20 10:00:54.580386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.809 [2024-11-20 10:00:54.580412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.809 [2024-11-20 10:00:54.580426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.809 [2024-11-20 10:00:54.580440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.809 [2024-11-20 10:00:54.580470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.809 qpair failed and we were unable to recover it. 00:27:17.809 [2024-11-20 10:00:54.590361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.809 [2024-11-20 10:00:54.590450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.809 [2024-11-20 10:00:54.590475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.809 [2024-11-20 10:00:54.590490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.809 [2024-11-20 10:00:54.590503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.809 [2024-11-20 10:00:54.590532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.809 qpair failed and we were unable to recover it. 
00:27:17.809 [2024-11-20 10:00:54.600387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.809 [2024-11-20 10:00:54.600489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.809 [2024-11-20 10:00:54.600514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.809 [2024-11-20 10:00:54.600528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.809 [2024-11-20 10:00:54.600541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.809 [2024-11-20 10:00:54.600572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.809 qpair failed and we were unable to recover it. 00:27:17.809 [2024-11-20 10:00:54.610380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.809 [2024-11-20 10:00:54.610471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.809 [2024-11-20 10:00:54.610496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.809 [2024-11-20 10:00:54.610510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.809 [2024-11-20 10:00:54.610523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.809 [2024-11-20 10:00:54.610552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.809 qpair failed and we were unable to recover it. 00:27:17.809 [2024-11-20 10:00:54.620409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.809 [2024-11-20 10:00:54.620498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.809 [2024-11-20 10:00:54.620528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.809 [2024-11-20 10:00:54.620543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.809 [2024-11-20 10:00:54.620555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.809 [2024-11-20 10:00:54.620585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.809 qpair failed and we were unable to recover it. 
00:27:17.809 [2024-11-20 10:00:54.630446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.810 [2024-11-20 10:00:54.630567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.810 [2024-11-20 10:00:54.630594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.810 [2024-11-20 10:00:54.630608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.810 [2024-11-20 10:00:54.630621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.810 [2024-11-20 10:00:54.630650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.810 qpair failed and we were unable to recover it. 00:27:17.810 [2024-11-20 10:00:54.640512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.810 [2024-11-20 10:00:54.640600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.810 [2024-11-20 10:00:54.640626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.810 [2024-11-20 10:00:54.640640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.810 [2024-11-20 10:00:54.640654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.810 [2024-11-20 10:00:54.640682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.810 qpair failed and we were unable to recover it. 00:27:17.810 [2024-11-20 10:00:54.650523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.810 [2024-11-20 10:00:54.650609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.810 [2024-11-20 10:00:54.650635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.810 [2024-11-20 10:00:54.650649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.810 [2024-11-20 10:00:54.650662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.810 [2024-11-20 10:00:54.650691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.810 qpair failed and we were unable to recover it. 
00:27:17.810 [2024-11-20 10:00:54.660567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.810 [2024-11-20 10:00:54.660690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.810 [2024-11-20 10:00:54.660715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.810 [2024-11-20 10:00:54.660730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.810 [2024-11-20 10:00:54.660749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.810 [2024-11-20 10:00:54.660779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.810 qpair failed and we were unable to recover it. 00:27:17.810 [2024-11-20 10:00:54.670583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.810 [2024-11-20 10:00:54.670660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.810 [2024-11-20 10:00:54.670686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.810 [2024-11-20 10:00:54.670701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.810 [2024-11-20 10:00:54.670713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.810 [2024-11-20 10:00:54.670742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.810 qpair failed and we were unable to recover it. 00:27:17.810 [2024-11-20 10:00:54.680638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.810 [2024-11-20 10:00:54.680775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.810 [2024-11-20 10:00:54.680801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.810 [2024-11-20 10:00:54.680816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.810 [2024-11-20 10:00:54.680829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.810 [2024-11-20 10:00:54.680858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.810 qpair failed and we were unable to recover it. 
00:27:17.810 [2024-11-20 10:00:54.690649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.810 [2024-11-20 10:00:54.690765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.810 [2024-11-20 10:00:54.690790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.810 [2024-11-20 10:00:54.690805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.810 [2024-11-20 10:00:54.690818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.810 [2024-11-20 10:00:54.690847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.810 qpair failed and we were unable to recover it. 00:27:17.810 [2024-11-20 10:00:54.700667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.810 [2024-11-20 10:00:54.700748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.810 [2024-11-20 10:00:54.700773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.810 [2024-11-20 10:00:54.700788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.810 [2024-11-20 10:00:54.700801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.810 [2024-11-20 10:00:54.700830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.810 qpair failed and we were unable to recover it. 00:27:17.810 [2024-11-20 10:00:54.710692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.810 [2024-11-20 10:00:54.710786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.810 [2024-11-20 10:00:54.710812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.810 [2024-11-20 10:00:54.710826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.810 [2024-11-20 10:00:54.710839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:17.810 [2024-11-20 10:00:54.710868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.810 qpair failed and we were unable to recover it. 
00:27:18.070 [2024-11-20 10:00:54.720703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.070 [2024-11-20 10:00:54.720802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.070 [2024-11-20 10:00:54.720827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.070 [2024-11-20 10:00:54.720842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.070 [2024-11-20 10:00:54.720855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.070 [2024-11-20 10:00:54.720884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-11-20 10:00:54.730714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.070 [2024-11-20 10:00:54.730799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.070 [2024-11-20 10:00:54.730825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.070 [2024-11-20 10:00:54.730839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.070 [2024-11-20 10:00:54.730852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.070 [2024-11-20 10:00:54.730881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-11-20 10:00:54.740760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.070 [2024-11-20 10:00:54.740887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.070 [2024-11-20 10:00:54.740913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.070 [2024-11-20 10:00:54.740927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.070 [2024-11-20 10:00:54.740940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.070 [2024-11-20 10:00:54.740970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.070 qpair failed and we were unable to recover it. 
00:27:18.070 [2024-11-20 10:00:54.750797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.070 [2024-11-20 10:00:54.750928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.070 [2024-11-20 10:00:54.750959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.070 [2024-11-20 10:00:54.750975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.070 [2024-11-20 10:00:54.750987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.070 [2024-11-20 10:00:54.751017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-11-20 10:00:54.760861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.070 [2024-11-20 10:00:54.760947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.070 [2024-11-20 10:00:54.760973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.070 [2024-11-20 10:00:54.760987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.070 [2024-11-20 10:00:54.761000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.070 [2024-11-20 10:00:54.761029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-11-20 10:00:54.770864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.070 [2024-11-20 10:00:54.770989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.070 [2024-11-20 10:00:54.771015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.070 [2024-11-20 10:00:54.771029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.070 [2024-11-20 10:00:54.771042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.070 [2024-11-20 10:00:54.771071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.070 qpair failed and we were unable to recover it. 
00:27:18.070 [2024-11-20 10:00:54.780925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.070 [2024-11-20 10:00:54.781041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.070 [2024-11-20 10:00:54.781067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.070 [2024-11-20 10:00:54.781081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.070 [2024-11-20 10:00:54.781094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.070 [2024-11-20 10:00:54.781123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-11-20 10:00:54.790902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.070 [2024-11-20 10:00:54.791028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.070 [2024-11-20 10:00:54.791053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.070 [2024-11-20 10:00:54.791068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.070 [2024-11-20 10:00:54.791087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.070 [2024-11-20 10:00:54.791117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-11-20 10:00:54.800936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.070 [2024-11-20 10:00:54.801028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.070 [2024-11-20 10:00:54.801053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.070 [2024-11-20 10:00:54.801068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.070 [2024-11-20 10:00:54.801081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.070 [2024-11-20 10:00:54.801109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.070 qpair failed and we were unable to recover it. 
00:27:18.070 [2024-11-20 10:00:54.810978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.070 [2024-11-20 10:00:54.811070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.070 [2024-11-20 10:00:54.811099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.070 [2024-11-20 10:00:54.811115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.070 [2024-11-20 10:00:54.811129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.070 [2024-11-20 10:00:54.811159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.070 qpair failed and we were unable to recover it. 00:27:18.070 [2024-11-20 10:00:54.820976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.070 [2024-11-20 10:00:54.821062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.070 [2024-11-20 10:00:54.821087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.070 [2024-11-20 10:00:54.821101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.071 [2024-11-20 10:00:54.821114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.071 [2024-11-20 10:00:54.821144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-11-20 10:00:54.831029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.071 [2024-11-20 10:00:54.831119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.071 [2024-11-20 10:00:54.831144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.071 [2024-11-20 10:00:54.831159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.071 [2024-11-20 10:00:54.831172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.071 [2024-11-20 10:00:54.831201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.071 qpair failed and we were unable to recover it. 
00:27:18.071 [2024-11-20 10:00:54.841084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.071 [2024-11-20 10:00:54.841180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.071 [2024-11-20 10:00:54.841206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.071 [2024-11-20 10:00:54.841220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.071 [2024-11-20 10:00:54.841233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.071 [2024-11-20 10:00:54.841262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-11-20 10:00:54.851112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.071 [2024-11-20 10:00:54.851196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.071 [2024-11-20 10:00:54.851223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.071 [2024-11-20 10:00:54.851237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.071 [2024-11-20 10:00:54.851250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.071 [2024-11-20 10:00:54.851278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-11-20 10:00:54.861097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.071 [2024-11-20 10:00:54.861181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.071 [2024-11-20 10:00:54.861206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.071 [2024-11-20 10:00:54.861221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.071 [2024-11-20 10:00:54.861234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.071 [2024-11-20 10:00:54.861262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.071 qpair failed and we were unable to recover it. 
00:27:18.071 [2024-11-20 10:00:54.871172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.071 [2024-11-20 10:00:54.871270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.071 [2024-11-20 10:00:54.871296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.071 [2024-11-20 10:00:54.871320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.071 [2024-11-20 10:00:54.871334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.071 [2024-11-20 10:00:54.871364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-11-20 10:00:54.881258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.071 [2024-11-20 10:00:54.881372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.071 [2024-11-20 10:00:54.881403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.071 [2024-11-20 10:00:54.881418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.071 [2024-11-20 10:00:54.881430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.071 [2024-11-20 10:00:54.881459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-11-20 10:00:54.891202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.071 [2024-11-20 10:00:54.891307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.071 [2024-11-20 10:00:54.891333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.071 [2024-11-20 10:00:54.891347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.071 [2024-11-20 10:00:54.891360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.071 [2024-11-20 10:00:54.891389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.071 qpair failed and we were unable to recover it. 
00:27:18.071 [2024-11-20 10:00:54.901197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.071 [2024-11-20 10:00:54.901328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.071 [2024-11-20 10:00:54.901353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.071 [2024-11-20 10:00:54.901367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.071 [2024-11-20 10:00:54.901380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.071 [2024-11-20 10:00:54.901410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-11-20 10:00:54.911241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.071 [2024-11-20 10:00:54.911330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.071 [2024-11-20 10:00:54.911357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.071 [2024-11-20 10:00:54.911371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.071 [2024-11-20 10:00:54.911384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.071 [2024-11-20 10:00:54.911416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-11-20 10:00:54.921315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.071 [2024-11-20 10:00:54.921417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.071 [2024-11-20 10:00:54.921443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.071 [2024-11-20 10:00:54.921457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.071 [2024-11-20 10:00:54.921476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.071 [2024-11-20 10:00:54.921506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.071 qpair failed and we were unable to recover it. 
00:27:18.071 [2024-11-20 10:00:54.931289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.071 [2024-11-20 10:00:54.931387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.071 [2024-11-20 10:00:54.931411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.071 [2024-11-20 10:00:54.931425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.071 [2024-11-20 10:00:54.931437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.071 [2024-11-20 10:00:54.931467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-11-20 10:00:54.941342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.071 [2024-11-20 10:00:54.941470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.071 [2024-11-20 10:00:54.941495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.071 [2024-11-20 10:00:54.941509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.071 [2024-11-20 10:00:54.941523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.071 [2024-11-20 10:00:54.941552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.071 qpair failed and we were unable to recover it. 00:27:18.071 [2024-11-20 10:00:54.951375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.071 [2024-11-20 10:00:54.951466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.071 [2024-11-20 10:00:54.951496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.072 [2024-11-20 10:00:54.951512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.072 [2024-11-20 10:00:54.951525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.072 [2024-11-20 10:00:54.951556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.072 qpair failed and we were unable to recover it. 
00:27:18.072 [2024-11-20 10:00:54.961509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.072 [2024-11-20 10:00:54.961602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.072 [2024-11-20 10:00:54.961628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.072 [2024-11-20 10:00:54.961642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.072 [2024-11-20 10:00:54.961655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.072 [2024-11-20 10:00:54.961685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.072 [2024-11-20 10:00:54.971444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.072 [2024-11-20 10:00:54.971531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.072 [2024-11-20 10:00:54.971560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.072 [2024-11-20 10:00:54.971576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.072 [2024-11-20 10:00:54.971590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.072 [2024-11-20 10:00:54.971620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.072 qpair failed and we were unable to recover it. 00:27:18.331 [2024-11-20 10:00:54.981459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.331 [2024-11-20 10:00:54.981586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.331 [2024-11-20 10:00:54.981612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.331 [2024-11-20 10:00:54.981626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.331 [2024-11-20 10:00:54.981640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.331 [2024-11-20 10:00:54.981669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.331 qpair failed and we were unable to recover it. 
00:27:18.331 [2024-11-20 10:00:54.991550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.331 [2024-11-20 10:00:54.991650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.331 [2024-11-20 10:00:54.991676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.331 [2024-11-20 10:00:54.991690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.331 [2024-11-20 10:00:54.991703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.331 [2024-11-20 10:00:54.991732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.331 qpair failed and we were unable to recover it. 00:27:18.331 [2024-11-20 10:00:55.001521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.331 [2024-11-20 10:00:55.001609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.331 [2024-11-20 10:00:55.001635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.331 [2024-11-20 10:00:55.001650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.331 [2024-11-20 10:00:55.001662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.331 [2024-11-20 10:00:55.001692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.331 qpair failed and we were unable to recover it. 00:27:18.331 [2024-11-20 10:00:55.011670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.331 [2024-11-20 10:00:55.011796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.331 [2024-11-20 10:00:55.011827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.331 [2024-11-20 10:00:55.011842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.331 [2024-11-20 10:00:55.011855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.331 [2024-11-20 10:00:55.011884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.331 qpair failed and we were unable to recover it. 
00:27:18.331 [2024-11-20 10:00:55.021609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.331 [2024-11-20 10:00:55.021694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.331 [2024-11-20 10:00:55.021723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.331 [2024-11-20 10:00:55.021739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.331 [2024-11-20 10:00:55.021752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.331 [2024-11-20 10:00:55.021783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.331 qpair failed and we were unable to recover it. 00:27:18.331 [2024-11-20 10:00:55.031637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.331 [2024-11-20 10:00:55.031727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.331 [2024-11-20 10:00:55.031755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.331 [2024-11-20 10:00:55.031771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.331 [2024-11-20 10:00:55.031785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.331 [2024-11-20 10:00:55.031816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.331 qpair failed and we were unable to recover it. 00:27:18.331 [2024-11-20 10:00:55.041647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.332 [2024-11-20 10:00:55.041738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.332 [2024-11-20 10:00:55.041765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.332 [2024-11-20 10:00:55.041779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.332 [2024-11-20 10:00:55.041792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.332 [2024-11-20 10:00:55.041821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.332 qpair failed and we were unable to recover it. 
00:27:18.332 [2024-11-20 10:00:55.051652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.332 [2024-11-20 10:00:55.051755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.332 [2024-11-20 10:00:55.051781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.332 [2024-11-20 10:00:55.051795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.332 [2024-11-20 10:00:55.051813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.332 [2024-11-20 10:00:55.051845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.332 qpair failed and we were unable to recover it. 00:27:18.332 [2024-11-20 10:00:55.061764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.332 [2024-11-20 10:00:55.061846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.332 [2024-11-20 10:00:55.061872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.332 [2024-11-20 10:00:55.061886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.332 [2024-11-20 10:00:55.061899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.332 [2024-11-20 10:00:55.061930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.332 qpair failed and we were unable to recover it. 00:27:18.332 [2024-11-20 10:00:55.071747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.332 [2024-11-20 10:00:55.071831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.332 [2024-11-20 10:00:55.071857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.332 [2024-11-20 10:00:55.071871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.332 [2024-11-20 10:00:55.071884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.332 [2024-11-20 10:00:55.071913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.332 qpair failed and we were unable to recover it. 
00:27:18.332 [2024-11-20 10:00:55.081904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.332 [2024-11-20 10:00:55.082003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.332 [2024-11-20 10:00:55.082028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.332 [2024-11-20 10:00:55.082042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.332 [2024-11-20 10:00:55.082056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.332 [2024-11-20 10:00:55.082084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.332 qpair failed and we were unable to recover it. 00:27:18.332 [2024-11-20 10:00:55.091855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.332 [2024-11-20 10:00:55.091940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.332 [2024-11-20 10:00:55.091965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.332 [2024-11-20 10:00:55.091979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.332 [2024-11-20 10:00:55.091992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.332 [2024-11-20 10:00:55.092021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.332 qpair failed and we were unable to recover it. 00:27:18.332 [2024-11-20 10:00:55.101845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.332 [2024-11-20 10:00:55.101930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.332 [2024-11-20 10:00:55.101955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.332 [2024-11-20 10:00:55.101969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.332 [2024-11-20 10:00:55.101982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.332 [2024-11-20 10:00:55.102010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.332 qpair failed and we were unable to recover it. 
00:27:18.332 [2024-11-20 10:00:55.112010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.332 [2024-11-20 10:00:55.112097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.332 [2024-11-20 10:00:55.112122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.332 [2024-11-20 10:00:55.112137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.332 [2024-11-20 10:00:55.112150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.332 [2024-11-20 10:00:55.112179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.332 qpair failed and we were unable to recover it. 00:27:18.332 [2024-11-20 10:00:55.121901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.332 [2024-11-20 10:00:55.122020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.332 [2024-11-20 10:00:55.122046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.332 [2024-11-20 10:00:55.122060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.332 [2024-11-20 10:00:55.122073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.332 [2024-11-20 10:00:55.122102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.332 qpair failed and we were unable to recover it. 00:27:18.332 [2024-11-20 10:00:55.131908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.332 [2024-11-20 10:00:55.132014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.332 [2024-11-20 10:00:55.132039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.332 [2024-11-20 10:00:55.132054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.332 [2024-11-20 10:00:55.132067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.332 [2024-11-20 10:00:55.132095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.332 qpair failed and we were unable to recover it. 
00:27:18.332 [2024-11-20 10:00:55.141909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.332 [2024-11-20 10:00:55.141987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.332 [2024-11-20 10:00:55.142021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.332 [2024-11-20 10:00:55.142036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.332 [2024-11-20 10:00:55.142049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.332 [2024-11-20 10:00:55.142080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.332 qpair failed and we were unable to recover it. 00:27:18.332 [2024-11-20 10:00:55.152019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.332 [2024-11-20 10:00:55.152101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.332 [2024-11-20 10:00:55.152126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.332 [2024-11-20 10:00:55.152140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.332 [2024-11-20 10:00:55.152154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.332 [2024-11-20 10:00:55.152183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.332 qpair failed and we were unable to recover it. 00:27:18.332 [2024-11-20 10:00:55.161974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.333 [2024-11-20 10:00:55.162063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.333 [2024-11-20 10:00:55.162088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.333 [2024-11-20 10:00:55.162103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.333 [2024-11-20 10:00:55.162115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.333 [2024-11-20 10:00:55.162144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.333 qpair failed and we were unable to recover it. 
00:27:18.333 [2024-11-20 10:00:55.172005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.333 [2024-11-20 10:00:55.172090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.333 [2024-11-20 10:00:55.172115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.333 [2024-11-20 10:00:55.172130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.333 [2024-11-20 10:00:55.172143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.333 [2024-11-20 10:00:55.172172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.333 qpair failed and we were unable to recover it. 00:27:18.333 [2024-11-20 10:00:55.182025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.333 [2024-11-20 10:00:55.182103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.333 [2024-11-20 10:00:55.182129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.333 [2024-11-20 10:00:55.182144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.333 [2024-11-20 10:00:55.182164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.333 [2024-11-20 10:00:55.182194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.333 qpair failed and we were unable to recover it. 00:27:18.333 [2024-11-20 10:00:55.192071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.333 [2024-11-20 10:00:55.192155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.333 [2024-11-20 10:00:55.192181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.333 [2024-11-20 10:00:55.192195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.333 [2024-11-20 10:00:55.192208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.333 [2024-11-20 10:00:55.192237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.333 qpair failed and we were unable to recover it. 
00:27:18.333 [2024-11-20 10:00:55.202129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.333 [2024-11-20 10:00:55.202231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.333 [2024-11-20 10:00:55.202257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.333 [2024-11-20 10:00:55.202272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.333 [2024-11-20 10:00:55.202284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.333 [2024-11-20 10:00:55.202320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.333 qpair failed and we were unable to recover it. 00:27:18.333 [2024-11-20 10:00:55.212147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.333 [2024-11-20 10:00:55.212237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.333 [2024-11-20 10:00:55.212266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.333 [2024-11-20 10:00:55.212282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.333 [2024-11-20 10:00:55.212295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.333 [2024-11-20 10:00:55.212341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.333 qpair failed and we were unable to recover it. 00:27:18.333 [2024-11-20 10:00:55.222141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.333 [2024-11-20 10:00:55.222228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.333 [2024-11-20 10:00:55.222253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.333 [2024-11-20 10:00:55.222267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.333 [2024-11-20 10:00:55.222280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.333 [2024-11-20 10:00:55.222316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.333 qpair failed and we were unable to recover it. 
00:27:18.333 [2024-11-20 10:00:55.232167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.333 [2024-11-20 10:00:55.232260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.333 [2024-11-20 10:00:55.232285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.333 [2024-11-20 10:00:55.232299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.333 [2024-11-20 10:00:55.232323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.333 [2024-11-20 10:00:55.232354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.333 qpair failed and we were unable to recover it. 00:27:18.592 [2024-11-20 10:00:55.242241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.592 [2024-11-20 10:00:55.242366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.592 [2024-11-20 10:00:55.242395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.592 [2024-11-20 10:00:55.242411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.592 [2024-11-20 10:00:55.242424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.592 [2024-11-20 10:00:55.242454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.592 qpair failed and we were unable to recover it. 00:27:18.592 [2024-11-20 10:00:55.252295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.592 [2024-11-20 10:00:55.252443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.592 [2024-11-20 10:00:55.252469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.592 [2024-11-20 10:00:55.252483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.592 [2024-11-20 10:00:55.252496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.592 [2024-11-20 10:00:55.252525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.592 qpair failed and we were unable to recover it. 
00:27:18.592 [2024-11-20 10:00:55.262300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.592 [2024-11-20 10:00:55.262393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.592 [2024-11-20 10:00:55.262419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.592 [2024-11-20 10:00:55.262434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.592 [2024-11-20 10:00:55.262447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.592 [2024-11-20 10:00:55.262476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.592 qpair failed and we were unable to recover it. 00:27:18.592 [2024-11-20 10:00:55.272345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.592 [2024-11-20 10:00:55.272452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.592 [2024-11-20 10:00:55.272483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.592 [2024-11-20 10:00:55.272499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.592 [2024-11-20 10:00:55.272512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.592 [2024-11-20 10:00:55.272542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.592 qpair failed and we were unable to recover it. 00:27:18.592 [2024-11-20 10:00:55.282322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.592 [2024-11-20 10:00:55.282454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.592 [2024-11-20 10:00:55.282480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.592 [2024-11-20 10:00:55.282494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.593 [2024-11-20 10:00:55.282508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.593 [2024-11-20 10:00:55.282537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.593 qpair failed and we were unable to recover it. 
00:27:18.593 [2024-11-20 10:00:55.292353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.593 [2024-11-20 10:00:55.292439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.593 [2024-11-20 10:00:55.292465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.593 [2024-11-20 10:00:55.292479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.593 [2024-11-20 10:00:55.292492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.593 [2024-11-20 10:00:55.292521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.593 qpair failed and we were unable to recover it. 00:27:18.593 [2024-11-20 10:00:55.302371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.593 [2024-11-20 10:00:55.302501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.593 [2024-11-20 10:00:55.302526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.593 [2024-11-20 10:00:55.302540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.593 [2024-11-20 10:00:55.302553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.593 [2024-11-20 10:00:55.302584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.593 qpair failed and we were unable to recover it. 00:27:18.593 [2024-11-20 10:00:55.312482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.593 [2024-11-20 10:00:55.312568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.593 [2024-11-20 10:00:55.312594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.593 [2024-11-20 10:00:55.312609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.593 [2024-11-20 10:00:55.312627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.593 [2024-11-20 10:00:55.312657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.593 qpair failed and we were unable to recover it. 
00:27:18.593 [2024-11-20 10:00:55.322463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.593 [2024-11-20 10:00:55.322552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.593 [2024-11-20 10:00:55.322579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.593 [2024-11-20 10:00:55.322593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.593 [2024-11-20 10:00:55.322605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.593 [2024-11-20 10:00:55.322635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.593 qpair failed and we were unable to recover it. 00:27:18.593 [2024-11-20 10:00:55.332451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.593 [2024-11-20 10:00:55.332538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.593 [2024-11-20 10:00:55.332564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.593 [2024-11-20 10:00:55.332578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.593 [2024-11-20 10:00:55.332591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.593 [2024-11-20 10:00:55.332620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.593 qpair failed and we were unable to recover it. 00:27:18.593 [2024-11-20 10:00:55.342484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.593 [2024-11-20 10:00:55.342605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.593 [2024-11-20 10:00:55.342630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.593 [2024-11-20 10:00:55.342644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.593 [2024-11-20 10:00:55.342657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.593 [2024-11-20 10:00:55.342686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.593 qpair failed and we were unable to recover it. 
00:27:18.593 [2024-11-20 10:00:55.352506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.593 [2024-11-20 10:00:55.352595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.593 [2024-11-20 10:00:55.352622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.593 [2024-11-20 10:00:55.352636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.593 [2024-11-20 10:00:55.352650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.593 [2024-11-20 10:00:55.352680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.593 qpair failed and we were unable to recover it. 00:27:18.593 [2024-11-20 10:00:55.362547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.593 [2024-11-20 10:00:55.362639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.593 [2024-11-20 10:00:55.362665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.593 [2024-11-20 10:00:55.362679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.593 [2024-11-20 10:00:55.362692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.593 [2024-11-20 10:00:55.362722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.593 qpair failed and we were unable to recover it. 00:27:18.593 [2024-11-20 10:00:55.372610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.593 [2024-11-20 10:00:55.372739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.593 [2024-11-20 10:00:55.372764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.593 [2024-11-20 10:00:55.372779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.593 [2024-11-20 10:00:55.372791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.593 [2024-11-20 10:00:55.372821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.593 qpair failed and we were unable to recover it. 
00:27:18.593 [2024-11-20 10:00:55.382627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.593 [2024-11-20 10:00:55.382707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.593 [2024-11-20 10:00:55.382732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.593 [2024-11-20 10:00:55.382746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.593 [2024-11-20 10:00:55.382759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.593 [2024-11-20 10:00:55.382789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.593 qpair failed and we were unable to recover it. 00:27:18.593 [2024-11-20 10:00:55.392714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.593 [2024-11-20 10:00:55.392797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.593 [2024-11-20 10:00:55.392823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.593 [2024-11-20 10:00:55.392836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.593 [2024-11-20 10:00:55.392849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.593 [2024-11-20 10:00:55.392878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.593 qpair failed and we were unable to recover it. 00:27:18.593 [2024-11-20 10:00:55.402689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.593 [2024-11-20 10:00:55.402775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.593 [2024-11-20 10:00:55.402807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.593 [2024-11-20 10:00:55.402822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.593 [2024-11-20 10:00:55.402835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.593 [2024-11-20 10:00:55.402864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.593 qpair failed and we were unable to recover it. 
00:27:18.593 [2024-11-20 10:00:55.412711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.593 [2024-11-20 10:00:55.412798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.593 [2024-11-20 10:00:55.412824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.593 [2024-11-20 10:00:55.412838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.593 [2024-11-20 10:00:55.412851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.594 [2024-11-20 10:00:55.412880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.594 qpair failed and we were unable to recover it. 00:27:18.594 [2024-11-20 10:00:55.422784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.594 [2024-11-20 10:00:55.422861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.594 [2024-11-20 10:00:55.422887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.594 [2024-11-20 10:00:55.422901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.594 [2024-11-20 10:00:55.422914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.594 [2024-11-20 10:00:55.422943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.594 qpair failed and we were unable to recover it. 00:27:18.594 [2024-11-20 10:00:55.432722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.594 [2024-11-20 10:00:55.432803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.594 [2024-11-20 10:00:55.432829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.594 [2024-11-20 10:00:55.432843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.594 [2024-11-20 10:00:55.432855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.594 [2024-11-20 10:00:55.432887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.594 qpair failed and we were unable to recover it. 
00:27:18.594 [2024-11-20 10:00:55.442749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.594 [2024-11-20 10:00:55.442837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.594 [2024-11-20 10:00:55.442862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.594 [2024-11-20 10:00:55.442876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.594 [2024-11-20 10:00:55.442895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.594 [2024-11-20 10:00:55.442925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.594 qpair failed and we were unable to recover it. 00:27:18.594 [2024-11-20 10:00:55.452814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.594 [2024-11-20 10:00:55.452891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.594 [2024-11-20 10:00:55.452916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.594 [2024-11-20 10:00:55.452931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.594 [2024-11-20 10:00:55.452943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.594 [2024-11-20 10:00:55.452972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.594 qpair failed and we were unable to recover it. 00:27:18.594 [2024-11-20 10:00:55.462815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.594 [2024-11-20 10:00:55.462896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.594 [2024-11-20 10:00:55.462922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.594 [2024-11-20 10:00:55.462936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.594 [2024-11-20 10:00:55.462949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.594 [2024-11-20 10:00:55.462977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.594 qpair failed and we were unable to recover it. 
00:27:18.594 [2024-11-20 10:00:55.472835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.594 [2024-11-20 10:00:55.472920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.594 [2024-11-20 10:00:55.472945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.594 [2024-11-20 10:00:55.472960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.594 [2024-11-20 10:00:55.472973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.594 [2024-11-20 10:00:55.473002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.594 qpair failed and we were unable to recover it. 00:27:18.594 [2024-11-20 10:00:55.482920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.594 [2024-11-20 10:00:55.483013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.594 [2024-11-20 10:00:55.483038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.594 [2024-11-20 10:00:55.483052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.594 [2024-11-20 10:00:55.483066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.594 [2024-11-20 10:00:55.483094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.594 qpair failed and we were unable to recover it. 00:27:18.594 [2024-11-20 10:00:55.492953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.594 [2024-11-20 10:00:55.493040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.594 [2024-11-20 10:00:55.493065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.594 [2024-11-20 10:00:55.493079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.594 [2024-11-20 10:00:55.493092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.594 [2024-11-20 10:00:55.493121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.594 qpair failed and we were unable to recover it. 
00:27:18.594 [2024-11-20 10:00:55.502968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.594 [2024-11-20 10:00:55.503056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.594 [2024-11-20 10:00:55.503081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.594 [2024-11-20 10:00:55.503095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.594 [2024-11-20 10:00:55.503108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.594 [2024-11-20 10:00:55.503139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.594 qpair failed and we were unable to recover it. 00:27:18.853 [2024-11-20 10:00:55.513013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.853 [2024-11-20 10:00:55.513113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.853 [2024-11-20 10:00:55.513142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.853 [2024-11-20 10:00:55.513159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.853 [2024-11-20 10:00:55.513172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.853 [2024-11-20 10:00:55.513202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.853 qpair failed and we were unable to recover it. 00:27:18.853 [2024-11-20 10:00:55.523003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.853 [2024-11-20 10:00:55.523090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.853 [2024-11-20 10:00:55.523116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.853 [2024-11-20 10:00:55.523130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.853 [2024-11-20 10:00:55.523143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.853 [2024-11-20 10:00:55.523171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.853 qpair failed and we were unable to recover it. 
00:27:18.853 [2024-11-20 10:00:55.533044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.853 [2024-11-20 10:00:55.533127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.853 [2024-11-20 10:00:55.533158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.853 [2024-11-20 10:00:55.533173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.853 [2024-11-20 10:00:55.533186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.853 [2024-11-20 10:00:55.533216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.853 qpair failed and we were unable to recover it. 00:27:18.853 [2024-11-20 10:00:55.543063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.853 [2024-11-20 10:00:55.543150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.853 [2024-11-20 10:00:55.543176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.853 [2024-11-20 10:00:55.543191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.853 [2024-11-20 10:00:55.543204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.853 [2024-11-20 10:00:55.543233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.853 qpair failed and we were unable to recover it. 00:27:18.853 [2024-11-20 10:00:55.553084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.853 [2024-11-20 10:00:55.553165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.853 [2024-11-20 10:00:55.553191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.853 [2024-11-20 10:00:55.553206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.853 [2024-11-20 10:00:55.553219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.853 [2024-11-20 10:00:55.553248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.853 qpair failed and we were unable to recover it. 
00:27:18.853 [2024-11-20 10:00:55.563144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.853 [2024-11-20 10:00:55.563254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.853 [2024-11-20 10:00:55.563279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.853 [2024-11-20 10:00:55.563293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.853 [2024-11-20 10:00:55.563313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.853 [2024-11-20 10:00:55.563344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.853 qpair failed and we were unable to recover it. 00:27:18.853 [2024-11-20 10:00:55.573182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.853 [2024-11-20 10:00:55.573269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.853 [2024-11-20 10:00:55.573294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.853 [2024-11-20 10:00:55.573317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.853 [2024-11-20 10:00:55.573336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.853 [2024-11-20 10:00:55.573367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.853 qpair failed and we were unable to recover it. 00:27:18.853 [2024-11-20 10:00:55.583273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.853 [2024-11-20 10:00:55.583367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.853 [2024-11-20 10:00:55.583392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.853 [2024-11-20 10:00:55.583406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.853 [2024-11-20 10:00:55.583418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.853 [2024-11-20 10:00:55.583447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.853 qpair failed and we were unable to recover it. 
00:27:18.853 [2024-11-20 10:00:55.593266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.853 [2024-11-20 10:00:55.593369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.853 [2024-11-20 10:00:55.593395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.853 [2024-11-20 10:00:55.593409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.853 [2024-11-20 10:00:55.593422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.853 [2024-11-20 10:00:55.593451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.853 qpair failed and we were unable to recover it. 00:27:18.853 [2024-11-20 10:00:55.603243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.853 [2024-11-20 10:00:55.603330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.853 [2024-11-20 10:00:55.603355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.853 [2024-11-20 10:00:55.603369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.853 [2024-11-20 10:00:55.603382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.853 [2024-11-20 10:00:55.603411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.853 qpair failed and we were unable to recover it. 00:27:18.853 [2024-11-20 10:00:55.613291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.853 [2024-11-20 10:00:55.613384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.853 [2024-11-20 10:00:55.613408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.853 [2024-11-20 10:00:55.613423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.853 [2024-11-20 10:00:55.613436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.853 [2024-11-20 10:00:55.613465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.853 qpair failed and we were unable to recover it. 
00:27:18.853 [2024-11-20 10:00:55.623391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.853 [2024-11-20 10:00:55.623477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.853 [2024-11-20 10:00:55.623502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.853 [2024-11-20 10:00:55.623516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.853 [2024-11-20 10:00:55.623529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.854 [2024-11-20 10:00:55.623558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.854 qpair failed and we were unable to recover it. 00:27:18.854 [2024-11-20 10:00:55.633377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.854 [2024-11-20 10:00:55.633464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.854 [2024-11-20 10:00:55.633491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.854 [2024-11-20 10:00:55.633509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.854 [2024-11-20 10:00:55.633523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.854 [2024-11-20 10:00:55.633555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.854 qpair failed and we were unable to recover it. 00:27:18.854 [2024-11-20 10:00:55.643363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.854 [2024-11-20 10:00:55.643455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.854 [2024-11-20 10:00:55.643480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.854 [2024-11-20 10:00:55.643494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.854 [2024-11-20 10:00:55.643507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.854 [2024-11-20 10:00:55.643536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.854 qpair failed and we were unable to recover it. 
00:27:18.854 [2024-11-20 10:00:55.653408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.854 [2024-11-20 10:00:55.653504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.854 [2024-11-20 10:00:55.653530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.854 [2024-11-20 10:00:55.653544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.854 [2024-11-20 10:00:55.653557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.854 [2024-11-20 10:00:55.653588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.854 qpair failed and we were unable to recover it. 00:27:18.854 [2024-11-20 10:00:55.663393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.854 [2024-11-20 10:00:55.663488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.854 [2024-11-20 10:00:55.663519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.854 [2024-11-20 10:00:55.663534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.854 [2024-11-20 10:00:55.663547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.854 [2024-11-20 10:00:55.663576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.854 qpair failed and we were unable to recover it. 00:27:18.854 [2024-11-20 10:00:55.673453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.854 [2024-11-20 10:00:55.673543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.854 [2024-11-20 10:00:55.673569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.854 [2024-11-20 10:00:55.673584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.854 [2024-11-20 10:00:55.673597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.854 [2024-11-20 10:00:55.673627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.854 qpair failed and we were unable to recover it. 
00:27:18.854 [2024-11-20 10:00:55.683496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.854 [2024-11-20 10:00:55.683582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.854 [2024-11-20 10:00:55.683608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.854 [2024-11-20 10:00:55.683622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.854 [2024-11-20 10:00:55.683635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.854 [2024-11-20 10:00:55.683666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.854 qpair failed and we were unable to recover it. 00:27:18.854 [2024-11-20 10:00:55.693610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.854 [2024-11-20 10:00:55.693709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.854 [2024-11-20 10:00:55.693738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.854 [2024-11-20 10:00:55.693755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.854 [2024-11-20 10:00:55.693768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.854 [2024-11-20 10:00:55.693798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.854 qpair failed and we were unable to recover it. 00:27:18.854 [2024-11-20 10:00:55.703591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.854 [2024-11-20 10:00:55.703673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.854 [2024-11-20 10:00:55.703699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.854 [2024-11-20 10:00:55.703712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.854 [2024-11-20 10:00:55.703731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.854 [2024-11-20 10:00:55.703761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.854 qpair failed and we were unable to recover it. 
00:27:18.854 [2024-11-20 10:00:55.713557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.854 [2024-11-20 10:00:55.713649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.854 [2024-11-20 10:00:55.713675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.854 [2024-11-20 10:00:55.713690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.854 [2024-11-20 10:00:55.713703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.854 [2024-11-20 10:00:55.713734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.854 qpair failed and we were unable to recover it. 00:27:18.854 [2024-11-20 10:00:55.723710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.854 [2024-11-20 10:00:55.723801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.854 [2024-11-20 10:00:55.723827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.854 [2024-11-20 10:00:55.723841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.854 [2024-11-20 10:00:55.723854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.854 [2024-11-20 10:00:55.723885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.854 qpair failed and we were unable to recover it. 00:27:18.854 [2024-11-20 10:00:55.733642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.854 [2024-11-20 10:00:55.733743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.854 [2024-11-20 10:00:55.733768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.854 [2024-11-20 10:00:55.733783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.854 [2024-11-20 10:00:55.733796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.854 [2024-11-20 10:00:55.733825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.854 qpair failed and we were unable to recover it. 
00:27:18.854 [2024-11-20 10:00:55.743736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.854 [2024-11-20 10:00:55.743821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.854 [2024-11-20 10:00:55.743847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.854 [2024-11-20 10:00:55.743861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.854 [2024-11-20 10:00:55.743874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.854 [2024-11-20 10:00:55.743903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.854 qpair failed and we were unable to recover it. 00:27:18.854 [2024-11-20 10:00:55.753676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.854 [2024-11-20 10:00:55.753766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.854 [2024-11-20 10:00:55.753791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.854 [2024-11-20 10:00:55.753806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.855 [2024-11-20 10:00:55.753819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.855 [2024-11-20 10:00:55.753849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.855 qpair failed and we were unable to recover it. 00:27:18.855 [2024-11-20 10:00:55.763730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.855 [2024-11-20 10:00:55.763824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.855 [2024-11-20 10:00:55.763852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.855 [2024-11-20 10:00:55.763868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.855 [2024-11-20 10:00:55.763882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:18.855 [2024-11-20 10:00:55.763912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:18.855 qpair failed and we were unable to recover it. 
00:27:19.114 [2024-11-20 10:00:55.773848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.114 [2024-11-20 10:00:55.773935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.114 [2024-11-20 10:00:55.773960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.114 [2024-11-20 10:00:55.773974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.114 [2024-11-20 10:00:55.773987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.114 [2024-11-20 10:00:55.774018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.114 qpair failed and we were unable to recover it. 00:27:19.114 [2024-11-20 10:00:55.783903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.114 [2024-11-20 10:00:55.784033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.114 [2024-11-20 10:00:55.784059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.114 [2024-11-20 10:00:55.784073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.114 [2024-11-20 10:00:55.784086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.114 [2024-11-20 10:00:55.784116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.114 qpair failed and we were unable to recover it. 00:27:19.114 [2024-11-20 10:00:55.793785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.114 [2024-11-20 10:00:55.793879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.114 [2024-11-20 10:00:55.793913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.114 [2024-11-20 10:00:55.793928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.114 [2024-11-20 10:00:55.793941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.114 [2024-11-20 10:00:55.793971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.114 qpair failed and we were unable to recover it. 
00:27:19.114 [2024-11-20 10:00:55.803858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.114 [2024-11-20 10:00:55.803947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.114 [2024-11-20 10:00:55.803972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.114 [2024-11-20 10:00:55.803987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.114 [2024-11-20 10:00:55.803999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.114 [2024-11-20 10:00:55.804028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.114 qpair failed and we were unable to recover it. 00:27:19.114 [2024-11-20 10:00:55.813898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.114 [2024-11-20 10:00:55.814022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.114 [2024-11-20 10:00:55.814048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.114 [2024-11-20 10:00:55.814063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.114 [2024-11-20 10:00:55.814076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.114 [2024-11-20 10:00:55.814105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.114 qpair failed and we were unable to recover it. 00:27:19.114 [2024-11-20 10:00:55.823877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.114 [2024-11-20 10:00:55.823968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.114 [2024-11-20 10:00:55.823994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.114 [2024-11-20 10:00:55.824008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.114 [2024-11-20 10:00:55.824021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.114 [2024-11-20 10:00:55.824050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.114 qpair failed and we were unable to recover it. 
00:27:19.114 [2024-11-20 10:00:55.833898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.114 [2024-11-20 10:00:55.833983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.114 [2024-11-20 10:00:55.834009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.114 [2024-11-20 10:00:55.834023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.114 [2024-11-20 10:00:55.834042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.114 [2024-11-20 10:00:55.834073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.114 qpair failed and we were unable to recover it. 00:27:19.114 [2024-11-20 10:00:55.843941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.114 [2024-11-20 10:00:55.844030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.114 [2024-11-20 10:00:55.844055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.114 [2024-11-20 10:00:55.844069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.114 [2024-11-20 10:00:55.844082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.114 [2024-11-20 10:00:55.844111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.114 qpair failed and we were unable to recover it. 00:27:19.114 [2024-11-20 10:00:55.853982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.114 [2024-11-20 10:00:55.854066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.114 [2024-11-20 10:00:55.854092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.114 [2024-11-20 10:00:55.854106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.114 [2024-11-20 10:00:55.854118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.114 [2024-11-20 10:00:55.854148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.114 qpair failed and we were unable to recover it. 
00:27:19.114 [2024-11-20 10:00:55.863996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.114 [2024-11-20 10:00:55.864081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.114 [2024-11-20 10:00:55.864109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.114 [2024-11-20 10:00:55.864127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.114 [2024-11-20 10:00:55.864140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.114 [2024-11-20 10:00:55.864170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.114 qpair failed and we were unable to recover it. 00:27:19.114 [2024-11-20 10:00:55.874041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.114 [2024-11-20 10:00:55.874129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.114 [2024-11-20 10:00:55.874156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.114 [2024-11-20 10:00:55.874170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.114 [2024-11-20 10:00:55.874184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.114 [2024-11-20 10:00:55.874214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.114 qpair failed and we were unable to recover it. 00:27:19.114 [2024-11-20 10:00:55.884070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.114 [2024-11-20 10:00:55.884178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.114 [2024-11-20 10:00:55.884204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.114 [2024-11-20 10:00:55.884219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.114 [2024-11-20 10:00:55.884231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.114 [2024-11-20 10:00:55.884263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.114 qpair failed and we were unable to recover it. 
00:27:19.114 [2024-11-20 10:00:55.894054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.114 [2024-11-20 10:00:55.894172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.115 [2024-11-20 10:00:55.894198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.115 [2024-11-20 10:00:55.894213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.115 [2024-11-20 10:00:55.894227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.115 [2024-11-20 10:00:55.894256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.115 qpair failed and we were unable to recover it. 00:27:19.115 [2024-11-20 10:00:55.904212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.115 [2024-11-20 10:00:55.904300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.115 [2024-11-20 10:00:55.904335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.115 [2024-11-20 10:00:55.904350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.115 [2024-11-20 10:00:55.904364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.115 [2024-11-20 10:00:55.904393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.115 qpair failed and we were unable to recover it. 00:27:19.115 [2024-11-20 10:00:55.914165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.115 [2024-11-20 10:00:55.914268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.115 [2024-11-20 10:00:55.914294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.115 [2024-11-20 10:00:55.914317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.115 [2024-11-20 10:00:55.914331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.115 [2024-11-20 10:00:55.914363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.115 qpair failed and we were unable to recover it. 
00:27:19.115 [2024-11-20 10:00:55.924171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.115 [2024-11-20 10:00:55.924265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.115 [2024-11-20 10:00:55.924299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.115 [2024-11-20 10:00:55.924331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.115 [2024-11-20 10:00:55.924344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.115 [2024-11-20 10:00:55.924375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.115 qpair failed and we were unable to recover it. 00:27:19.115 [2024-11-20 10:00:55.934178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.115 [2024-11-20 10:00:55.934292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.115 [2024-11-20 10:00:55.934324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.115 [2024-11-20 10:00:55.934340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.115 [2024-11-20 10:00:55.934352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.115 [2024-11-20 10:00:55.934381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.115 qpair failed and we were unable to recover it. 00:27:19.115 [2024-11-20 10:00:55.944235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.115 [2024-11-20 10:00:55.944331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.115 [2024-11-20 10:00:55.944357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.115 [2024-11-20 10:00:55.944372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.115 [2024-11-20 10:00:55.944385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.115 [2024-11-20 10:00:55.944414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.115 qpair failed and we were unable to recover it. 
00:27:19.115 [2024-11-20 10:00:55.954258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.115 [2024-11-20 10:00:55.954353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.115 [2024-11-20 10:00:55.954379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.115 [2024-11-20 10:00:55.954394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.115 [2024-11-20 10:00:55.954407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.115 [2024-11-20 10:00:55.954437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.115 qpair failed and we were unable to recover it. 00:27:19.115 [2024-11-20 10:00:55.964368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.115 [2024-11-20 10:00:55.964457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.115 [2024-11-20 10:00:55.964483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.115 [2024-11-20 10:00:55.964503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.115 [2024-11-20 10:00:55.964518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.115 [2024-11-20 10:00:55.964547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.115 qpair failed and we were unable to recover it. 00:27:19.115 [2024-11-20 10:00:55.974323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.115 [2024-11-20 10:00:55.974434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.115 [2024-11-20 10:00:55.974459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.115 [2024-11-20 10:00:55.974473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.115 [2024-11-20 10:00:55.974487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.115 [2024-11-20 10:00:55.974518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.115 qpair failed and we were unable to recover it. 
00:27:19.115 [2024-11-20 10:00:55.984328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.115 [2024-11-20 10:00:55.984414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.115 [2024-11-20 10:00:55.984439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.115 [2024-11-20 10:00:55.984453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.115 [2024-11-20 10:00:55.984466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.115 [2024-11-20 10:00:55.984495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.115 qpair failed and we were unable to recover it. 00:27:19.115 [2024-11-20 10:00:55.994364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.115 [2024-11-20 10:00:55.994452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.115 [2024-11-20 10:00:55.994477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.115 [2024-11-20 10:00:55.994491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.115 [2024-11-20 10:00:55.994504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.115 [2024-11-20 10:00:55.994533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.115 qpair failed and we were unable to recover it. 00:27:19.115 [2024-11-20 10:00:56.004412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.115 [2024-11-20 10:00:56.004505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.115 [2024-11-20 10:00:56.004530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.115 [2024-11-20 10:00:56.004545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.115 [2024-11-20 10:00:56.004558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.115 [2024-11-20 10:00:56.004587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.115 qpair failed and we were unable to recover it. 
00:27:19.115 [2024-11-20 10:00:56.014429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.115 [2024-11-20 10:00:56.014521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.115 [2024-11-20 10:00:56.014547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.115 [2024-11-20 10:00:56.014561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.115 [2024-11-20 10:00:56.014574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.115 [2024-11-20 10:00:56.014603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.115 qpair failed and we were unable to recover it. 00:27:19.115 [2024-11-20 10:00:56.024457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.116 [2024-11-20 10:00:56.024543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.116 [2024-11-20 10:00:56.024569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.116 [2024-11-20 10:00:56.024584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.116 [2024-11-20 10:00:56.024597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.116 [2024-11-20 10:00:56.024626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.116 qpair failed and we were unable to recover it. 00:27:19.374 [2024-11-20 10:00:56.034487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.374 [2024-11-20 10:00:56.034578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.374 [2024-11-20 10:00:56.034603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.374 [2024-11-20 10:00:56.034618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.374 [2024-11-20 10:00:56.034631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.374 [2024-11-20 10:00:56.034661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.374 qpair failed and we were unable to recover it. 
00:27:19.374 [2024-11-20 10:00:56.044503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.374 [2024-11-20 10:00:56.044591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.374 [2024-11-20 10:00:56.044617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.374 [2024-11-20 10:00:56.044632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.374 [2024-11-20 10:00:56.044644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.374 [2024-11-20 10:00:56.044673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.374 qpair failed and we were unable to recover it. 00:27:19.374 [2024-11-20 10:00:56.054519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.374 [2024-11-20 10:00:56.054604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.375 [2024-11-20 10:00:56.054635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.375 [2024-11-20 10:00:56.054650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.375 [2024-11-20 10:00:56.054662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.375 [2024-11-20 10:00:56.054691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.375 qpair failed and we were unable to recover it. 00:27:19.375 [2024-11-20 10:00:56.064576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.375 [2024-11-20 10:00:56.064665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.375 [2024-11-20 10:00:56.064690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.375 [2024-11-20 10:00:56.064704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.375 [2024-11-20 10:00:56.064717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.375 [2024-11-20 10:00:56.064745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.375 qpair failed and we were unable to recover it. 
00:27:19.375 [2024-11-20 10:00:56.074572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.375 [2024-11-20 10:00:56.074658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.375 [2024-11-20 10:00:56.074685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.375 [2024-11-20 10:00:56.074699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.375 [2024-11-20 10:00:56.074712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.375 [2024-11-20 10:00:56.074741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.375 qpair failed and we were unable to recover it. 00:27:19.375 [2024-11-20 10:00:56.084656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.375 [2024-11-20 10:00:56.084748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.375 [2024-11-20 10:00:56.084774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.375 [2024-11-20 10:00:56.084788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.375 [2024-11-20 10:00:56.084801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.375 [2024-11-20 10:00:56.084831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.375 qpair failed and we were unable to recover it. 00:27:19.375 [2024-11-20 10:00:56.094696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.375 [2024-11-20 10:00:56.094792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.375 [2024-11-20 10:00:56.094818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.375 [2024-11-20 10:00:56.094839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.375 [2024-11-20 10:00:56.094853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.375 [2024-11-20 10:00:56.094883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.375 qpair failed and we were unable to recover it. 
00:27:19.375 [2024-11-20 10:00:56.104682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.375 [2024-11-20 10:00:56.104770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.375 [2024-11-20 10:00:56.104795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.375 [2024-11-20 10:00:56.104809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.375 [2024-11-20 10:00:56.104822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.375 [2024-11-20 10:00:56.104851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.375 qpair failed and we were unable to recover it. 00:27:19.375 [2024-11-20 10:00:56.114773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.375 [2024-11-20 10:00:56.114862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.375 [2024-11-20 10:00:56.114888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.375 [2024-11-20 10:00:56.114902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.375 [2024-11-20 10:00:56.114915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.375 [2024-11-20 10:00:56.114944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.375 qpair failed and we were unable to recover it. 00:27:19.375 [2024-11-20 10:00:56.124730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.375 [2024-11-20 10:00:56.124817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.375 [2024-11-20 10:00:56.124841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.375 [2024-11-20 10:00:56.124856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.375 [2024-11-20 10:00:56.124868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.375 [2024-11-20 10:00:56.124897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.375 qpair failed and we were unable to recover it. 
00:27:19.375 [2024-11-20 10:00:56.134751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.375 [2024-11-20 10:00:56.134841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.375 [2024-11-20 10:00:56.134866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.375 [2024-11-20 10:00:56.134881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.375 [2024-11-20 10:00:56.134894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.375 [2024-11-20 10:00:56.134922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.375 qpair failed and we were unable to recover it. 00:27:19.375 [2024-11-20 10:00:56.144819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.375 [2024-11-20 10:00:56.144900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.375 [2024-11-20 10:00:56.144925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.375 [2024-11-20 10:00:56.144940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.375 [2024-11-20 10:00:56.144953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.375 [2024-11-20 10:00:56.144981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.375 qpair failed and we were unable to recover it. 00:27:19.375 [2024-11-20 10:00:56.154788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.375 [2024-11-20 10:00:56.154874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.375 [2024-11-20 10:00:56.154900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.375 [2024-11-20 10:00:56.154914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.375 [2024-11-20 10:00:56.154927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.375 [2024-11-20 10:00:56.154957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.375 qpair failed and we were unable to recover it. 
00:27:19.375 [2024-11-20 10:00:56.164886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.375 [2024-11-20 10:00:56.164973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.375 [2024-11-20 10:00:56.164999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.375 [2024-11-20 10:00:56.165013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.375 [2024-11-20 10:00:56.165026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.375 [2024-11-20 10:00:56.165055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.375 qpair failed and we were unable to recover it. 00:27:19.375 [2024-11-20 10:00:56.174851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.375 [2024-11-20 10:00:56.174968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.375 [2024-11-20 10:00:56.174994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.375 [2024-11-20 10:00:56.175008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.375 [2024-11-20 10:00:56.175021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.375 [2024-11-20 10:00:56.175051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.375 qpair failed and we were unable to recover it. 00:27:19.375 [2024-11-20 10:00:56.184886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.375 [2024-11-20 10:00:56.184966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.375 [2024-11-20 10:00:56.184997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.376 [2024-11-20 10:00:56.185012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.376 [2024-11-20 10:00:56.185025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.376 [2024-11-20 10:00:56.185056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.376 qpair failed and we were unable to recover it. 
00:27:19.376 [2024-11-20 10:00:56.194940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.376 [2024-11-20 10:00:56.195071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.376 [2024-11-20 10:00:56.195096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.376 [2024-11-20 10:00:56.195110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.376 [2024-11-20 10:00:56.195123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.376 [2024-11-20 10:00:56.195154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.376 qpair failed and we were unable to recover it. 00:27:19.376 [2024-11-20 10:00:56.204970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.376 [2024-11-20 10:00:56.205061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.376 [2024-11-20 10:00:56.205089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.376 [2024-11-20 10:00:56.205106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.376 [2024-11-20 10:00:56.205120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.376 [2024-11-20 10:00:56.205150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.376 qpair failed and we were unable to recover it. 00:27:19.376 [2024-11-20 10:00:56.215010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.376 [2024-11-20 10:00:56.215094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.376 [2024-11-20 10:00:56.215120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.376 [2024-11-20 10:00:56.215134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.376 [2024-11-20 10:00:56.215147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.376 [2024-11-20 10:00:56.215177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.376 qpair failed and we were unable to recover it. 
00:27:19.376 [2024-11-20 10:00:56.225010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.376 [2024-11-20 10:00:56.225097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.376 [2024-11-20 10:00:56.225123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.376 [2024-11-20 10:00:56.225143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.376 [2024-11-20 10:00:56.225157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.376 [2024-11-20 10:00:56.225186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.376 qpair failed and we were unable to recover it. 00:27:19.376 [2024-11-20 10:00:56.235040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.376 [2024-11-20 10:00:56.235129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.376 [2024-11-20 10:00:56.235155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.376 [2024-11-20 10:00:56.235169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.376 [2024-11-20 10:00:56.235183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.376 [2024-11-20 10:00:56.235213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.376 qpair failed and we were unable to recover it. 00:27:19.376 [2024-11-20 10:00:56.245165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.376 [2024-11-20 10:00:56.245259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.376 [2024-11-20 10:00:56.245285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.376 [2024-11-20 10:00:56.245299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.376 [2024-11-20 10:00:56.245319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.376 [2024-11-20 10:00:56.245350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.376 qpair failed and we were unable to recover it. 
00:27:19.376 [2024-11-20 10:00:56.255187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.376 [2024-11-20 10:00:56.255325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.376 [2024-11-20 10:00:56.255351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.376 [2024-11-20 10:00:56.255366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.376 [2024-11-20 10:00:56.255379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.376 [2024-11-20 10:00:56.255408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.376 qpair failed and we were unable to recover it. 00:27:19.376 [2024-11-20 10:00:56.265108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.376 [2024-11-20 10:00:56.265188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.376 [2024-11-20 10:00:56.265213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.376 [2024-11-20 10:00:56.265227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.376 [2024-11-20 10:00:56.265240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.376 [2024-11-20 10:00:56.265269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.376 qpair failed and we were unable to recover it. 00:27:19.376 [2024-11-20 10:00:56.275149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.376 [2024-11-20 10:00:56.275247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.376 [2024-11-20 10:00:56.275272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.376 [2024-11-20 10:00:56.275287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.376 [2024-11-20 10:00:56.275300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.376 [2024-11-20 10:00:56.275341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.376 qpair failed and we were unable to recover it. 
00:27:19.376 [2024-11-20 10:00:56.285193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.376 [2024-11-20 10:00:56.285278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.376 [2024-11-20 10:00:56.285312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.376 [2024-11-20 10:00:56.285330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.376 [2024-11-20 10:00:56.285345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.376 [2024-11-20 10:00:56.285375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.376 qpair failed and we were unable to recover it. 00:27:19.635 [2024-11-20 10:00:56.295271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.635 [2024-11-20 10:00:56.295372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.635 [2024-11-20 10:00:56.295397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.635 [2024-11-20 10:00:56.295412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.635 [2024-11-20 10:00:56.295425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.635 [2024-11-20 10:00:56.295454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.635 qpair failed and we were unable to recover it. 00:27:19.635 [2024-11-20 10:00:56.305266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.635 [2024-11-20 10:00:56.305370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.635 [2024-11-20 10:00:56.305399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.635 [2024-11-20 10:00:56.305416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.635 [2024-11-20 10:00:56.305429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.635 [2024-11-20 10:00:56.305460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.635 qpair failed and we were unable to recover it. 
00:27:19.635 [2024-11-20 10:00:56.315331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.635 [2024-11-20 10:00:56.315412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.635 [2024-11-20 10:00:56.315443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.635 [2024-11-20 10:00:56.315459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.635 [2024-11-20 10:00:56.315472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.635 [2024-11-20 10:00:56.315502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.635 qpair failed and we were unable to recover it. 00:27:19.635 [2024-11-20 10:00:56.325337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.635 [2024-11-20 10:00:56.325479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.635 [2024-11-20 10:00:56.325505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.635 [2024-11-20 10:00:56.325520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.635 [2024-11-20 10:00:56.325533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.635 [2024-11-20 10:00:56.325565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.635 qpair failed and we were unable to recover it. 00:27:19.635 [2024-11-20 10:00:56.335342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.635 [2024-11-20 10:00:56.335465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.635 [2024-11-20 10:00:56.335495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.635 [2024-11-20 10:00:56.335512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.635 [2024-11-20 10:00:56.335525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.635 [2024-11-20 10:00:56.335557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.635 qpair failed and we were unable to recover it. 
00:27:19.635 [2024-11-20 10:00:56.345384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.635 [2024-11-20 10:00:56.345493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.635 [2024-11-20 10:00:56.345520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.635 [2024-11-20 10:00:56.345534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.635 [2024-11-20 10:00:56.345546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.635 [2024-11-20 10:00:56.345575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.635 qpair failed and we were unable to recover it. 00:27:19.635 [2024-11-20 10:00:56.355396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.635 [2024-11-20 10:00:56.355478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.635 [2024-11-20 10:00:56.355504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.635 [2024-11-20 10:00:56.355525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.635 [2024-11-20 10:00:56.355538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.636 [2024-11-20 10:00:56.355570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-11-20 10:00:56.365449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.636 [2024-11-20 10:00:56.365553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.636 [2024-11-20 10:00:56.365578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.636 [2024-11-20 10:00:56.365592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.636 [2024-11-20 10:00:56.365605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.636 [2024-11-20 10:00:56.365634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.636 qpair failed and we were unable to recover it. 
00:27:19.636 [2024-11-20 10:00:56.375445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.636 [2024-11-20 10:00:56.375568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.636 [2024-11-20 10:00:56.375602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.636 [2024-11-20 10:00:56.375617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.636 [2024-11-20 10:00:56.375630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.636 [2024-11-20 10:00:56.375658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-11-20 10:00:56.385454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.636 [2024-11-20 10:00:56.385544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.636 [2024-11-20 10:00:56.385568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.636 [2024-11-20 10:00:56.385583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.636 [2024-11-20 10:00:56.385596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.636 [2024-11-20 10:00:56.385624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-11-20 10:00:56.395483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.636 [2024-11-20 10:00:56.395564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.636 [2024-11-20 10:00:56.395589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.636 [2024-11-20 10:00:56.395604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.636 [2024-11-20 10:00:56.395617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.636 [2024-11-20 10:00:56.395647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.636 qpair failed and we were unable to recover it. 
00:27:19.636 [2024-11-20 10:00:56.405550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.636 [2024-11-20 10:00:56.405644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.636 [2024-11-20 10:00:56.405669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.636 [2024-11-20 10:00:56.405684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.636 [2024-11-20 10:00:56.405697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.636 [2024-11-20 10:00:56.405726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-11-20 10:00:56.415552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.636 [2024-11-20 10:00:56.415643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.636 [2024-11-20 10:00:56.415669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.636 [2024-11-20 10:00:56.415683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.636 [2024-11-20 10:00:56.415696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.636 [2024-11-20 10:00:56.415725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-11-20 10:00:56.425555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.636 [2024-11-20 10:00:56.425637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.636 [2024-11-20 10:00:56.425663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.636 [2024-11-20 10:00:56.425677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.636 [2024-11-20 10:00:56.425690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.636 [2024-11-20 10:00:56.425718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.636 qpair failed and we were unable to recover it. 
00:27:19.636 [2024-11-20 10:00:56.435588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.636 [2024-11-20 10:00:56.435680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.636 [2024-11-20 10:00:56.435705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.636 [2024-11-20 10:00:56.435720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.636 [2024-11-20 10:00:56.435733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.636 [2024-11-20 10:00:56.435762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-11-20 10:00:56.445648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.636 [2024-11-20 10:00:56.445744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.636 [2024-11-20 10:00:56.445769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.636 [2024-11-20 10:00:56.445783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.636 [2024-11-20 10:00:56.445796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.636 [2024-11-20 10:00:56.445825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-11-20 10:00:56.455716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.636 [2024-11-20 10:00:56.455825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.636 [2024-11-20 10:00:56.455850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.636 [2024-11-20 10:00:56.455864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.636 [2024-11-20 10:00:56.455877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.636 [2024-11-20 10:00:56.455906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.636 qpair failed and we were unable to recover it. 
00:27:19.636 [2024-11-20 10:00:56.465688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.636 [2024-11-20 10:00:56.465806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.636 [2024-11-20 10:00:56.465831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.636 [2024-11-20 10:00:56.465845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.636 [2024-11-20 10:00:56.465859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.636 [2024-11-20 10:00:56.465887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-11-20 10:00:56.475719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.636 [2024-11-20 10:00:56.475804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.636 [2024-11-20 10:00:56.475829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.636 [2024-11-20 10:00:56.475843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.636 [2024-11-20 10:00:56.475856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.636 [2024-11-20 10:00:56.475885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.636 qpair failed and we were unable to recover it. 00:27:19.636 [2024-11-20 10:00:56.485757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.636 [2024-11-20 10:00:56.485844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.636 [2024-11-20 10:00:56.485869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.636 [2024-11-20 10:00:56.485892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.636 [2024-11-20 10:00:56.485906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.637 [2024-11-20 10:00:56.485935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.637 [2024-11-20 10:00:56.495780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.637 [2024-11-20 10:00:56.495868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.637 [2024-11-20 10:00:56.495893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.637 [2024-11-20 10:00:56.495907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.637 [2024-11-20 10:00:56.495920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.637 [2024-11-20 10:00:56.495949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-11-20 10:00:56.505795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.637 [2024-11-20 10:00:56.505878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.637 [2024-11-20 10:00:56.505903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.637 [2024-11-20 10:00:56.505918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.637 [2024-11-20 10:00:56.505931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.637 [2024-11-20 10:00:56.505962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-11-20 10:00:56.515808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.637 [2024-11-20 10:00:56.515890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.637 [2024-11-20 10:00:56.515916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.637 [2024-11-20 10:00:56.515931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.637 [2024-11-20 10:00:56.515944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.637 [2024-11-20 10:00:56.515973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.637 [2024-11-20 10:00:56.525899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.637 [2024-11-20 10:00:56.525986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.637 [2024-11-20 10:00:56.526011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.637 [2024-11-20 10:00:56.526026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.637 [2024-11-20 10:00:56.526038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.637 [2024-11-20 10:00:56.526066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-11-20 10:00:56.535882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.637 [2024-11-20 10:00:56.535970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.637 [2024-11-20 10:00:56.535995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.637 [2024-11-20 10:00:56.536009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.637 [2024-11-20 10:00:56.536022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.637 [2024-11-20 10:00:56.536052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.637 qpair failed and we were unable to recover it. 00:27:19.637 [2024-11-20 10:00:56.545955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.637 [2024-11-20 10:00:56.546041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.637 [2024-11-20 10:00:56.546066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.637 [2024-11-20 10:00:56.546081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.637 [2024-11-20 10:00:56.546093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.637 [2024-11-20 10:00:56.546123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.896 [2024-11-20 10:00:56.555982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.896 [2024-11-20 10:00:56.556106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.896 [2024-11-20 10:00:56.556132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.896 [2024-11-20 10:00:56.556147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.896 [2024-11-20 10:00:56.556159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.896 [2024-11-20 10:00:56.556189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-11-20 10:00:56.565988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.896 [2024-11-20 10:00:56.566081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.896 [2024-11-20 10:00:56.566107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.896 [2024-11-20 10:00:56.566121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.896 [2024-11-20 10:00:56.566134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.896 [2024-11-20 10:00:56.566163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-11-20 10:00:56.576075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.896 [2024-11-20 10:00:56.576202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.896 [2024-11-20 10:00:56.576227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.896 [2024-11-20 10:00:56.576242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.896 [2024-11-20 10:00:56.576255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.896 [2024-11-20 10:00:56.576284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.896 qpair failed and we were unable to recover it. 
00:27:19.896 [2024-11-20 10:00:56.586066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.896 [2024-11-20 10:00:56.586152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.896 [2024-11-20 10:00:56.586177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.896 [2024-11-20 10:00:56.586191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.896 [2024-11-20 10:00:56.586204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.896 [2024-11-20 10:00:56.586233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-11-20 10:00:56.596042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.897 [2024-11-20 10:00:56.596127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.897 [2024-11-20 10:00:56.596152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.897 [2024-11-20 10:00:56.596166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.897 [2024-11-20 10:00:56.596179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.897 [2024-11-20 10:00:56.596208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-11-20 10:00:56.606114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.897 [2024-11-20 10:00:56.606244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.897 [2024-11-20 10:00:56.606269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.897 [2024-11-20 10:00:56.606283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.897 [2024-11-20 10:00:56.606296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.897 [2024-11-20 10:00:56.606334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.897 qpair failed and we were unable to recover it. 
00:27:19.897 [2024-11-20 10:00:56.616168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.897 [2024-11-20 10:00:56.616264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.897 [2024-11-20 10:00:56.616292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.897 [2024-11-20 10:00:56.616324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.897 [2024-11-20 10:00:56.616339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.897 [2024-11-20 10:00:56.616369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-11-20 10:00:56.626157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.897 [2024-11-20 10:00:56.626238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.897 [2024-11-20 10:00:56.626264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.897 [2024-11-20 10:00:56.626278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.897 [2024-11-20 10:00:56.626292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.897 [2024-11-20 10:00:56.626328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-11-20 10:00:56.636191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.897 [2024-11-20 10:00:56.636276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.897 [2024-11-20 10:00:56.636310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.897 [2024-11-20 10:00:56.636327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.897 [2024-11-20 10:00:56.636342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.897 [2024-11-20 10:00:56.636373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.897 qpair failed and we were unable to recover it. 
00:27:19.897 [2024-11-20 10:00:56.646229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.897 [2024-11-20 10:00:56.646327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.897 [2024-11-20 10:00:56.646353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.897 [2024-11-20 10:00:56.646367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.897 [2024-11-20 10:00:56.646381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.897 [2024-11-20 10:00:56.646411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-11-20 10:00:56.656234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.897 [2024-11-20 10:00:56.656327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.897 [2024-11-20 10:00:56.656353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.897 [2024-11-20 10:00:56.656367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.897 [2024-11-20 10:00:56.656381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.897 [2024-11-20 10:00:56.656417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-11-20 10:00:56.666264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.897 [2024-11-20 10:00:56.666380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.897 [2024-11-20 10:00:56.666405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.897 [2024-11-20 10:00:56.666419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.897 [2024-11-20 10:00:56.666432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.897 [2024-11-20 10:00:56.666461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.897 qpair failed and we were unable to recover it. 
00:27:19.897 [2024-11-20 10:00:56.676320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.897 [2024-11-20 10:00:56.676414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.897 [2024-11-20 10:00:56.676439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.897 [2024-11-20 10:00:56.676454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.897 [2024-11-20 10:00:56.676466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.897 [2024-11-20 10:00:56.676496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-11-20 10:00:56.686323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.897 [2024-11-20 10:00:56.686412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.897 [2024-11-20 10:00:56.686438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.897 [2024-11-20 10:00:56.686452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.897 [2024-11-20 10:00:56.686465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.897 [2024-11-20 10:00:56.686494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-11-20 10:00:56.696379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.897 [2024-11-20 10:00:56.696471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.897 [2024-11-20 10:00:56.696496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.897 [2024-11-20 10:00:56.696510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.897 [2024-11-20 10:00:56.696523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.897 [2024-11-20 10:00:56.696552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.897 qpair failed and we were unable to recover it. 
00:27:19.897 [2024-11-20 10:00:56.706397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.897 [2024-11-20 10:00:56.706507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.897 [2024-11-20 10:00:56.706533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.897 [2024-11-20 10:00:56.706547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.897 [2024-11-20 10:00:56.706562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.897 [2024-11-20 10:00:56.706590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-11-20 10:00:56.716431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.897 [2024-11-20 10:00:56.716518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.897 [2024-11-20 10:00:56.716543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.898 [2024-11-20 10:00:56.716557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.898 [2024-11-20 10:00:56.716569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.898 [2024-11-20 10:00:56.716599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.898 qpair failed and we were unable to recover it. 00:27:19.898 [2024-11-20 10:00:56.726484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.898 [2024-11-20 10:00:56.726570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.898 [2024-11-20 10:00:56.726595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.898 [2024-11-20 10:00:56.726609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.898 [2024-11-20 10:00:56.726623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.898 [2024-11-20 10:00:56.726651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.898 qpair failed and we were unable to recover it. 
00:27:19.898 [2024-11-20 10:00:56.736489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.898 [2024-11-20 10:00:56.736579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.898 [2024-11-20 10:00:56.736605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.898 [2024-11-20 10:00:56.736619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.898 [2024-11-20 10:00:56.736632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.898 [2024-11-20 10:00:56.736660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.898 qpair failed and we were unable to recover it. 00:27:19.898 [2024-11-20 10:00:56.746493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.898 [2024-11-20 10:00:56.746625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.898 [2024-11-20 10:00:56.746650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.898 [2024-11-20 10:00:56.746670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.898 [2024-11-20 10:00:56.746684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.898 [2024-11-20 10:00:56.746713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.898 qpair failed and we were unable to recover it. 00:27:19.898 [2024-11-20 10:00:56.756544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.898 [2024-11-20 10:00:56.756638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.898 [2024-11-20 10:00:56.756664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.898 [2024-11-20 10:00:56.756678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.898 [2024-11-20 10:00:56.756691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.898 [2024-11-20 10:00:56.756720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.898 qpair failed and we were unable to recover it. 
00:27:19.898 [2024-11-20 10:00:56.766583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.898 [2024-11-20 10:00:56.766674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.898 [2024-11-20 10:00:56.766699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.898 [2024-11-20 10:00:56.766714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.898 [2024-11-20 10:00:56.766726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.898 [2024-11-20 10:00:56.766754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.898 qpair failed and we were unable to recover it. 00:27:19.898 [2024-11-20 10:00:56.776603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.898 [2024-11-20 10:00:56.776724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.898 [2024-11-20 10:00:56.776749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.898 [2024-11-20 10:00:56.776763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.898 [2024-11-20 10:00:56.776776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.898 [2024-11-20 10:00:56.776805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.898 qpair failed and we were unable to recover it. 00:27:19.898 [2024-11-20 10:00:56.786611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.898 [2024-11-20 10:00:56.786699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.898 [2024-11-20 10:00:56.786724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.898 [2024-11-20 10:00:56.786739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.898 [2024-11-20 10:00:56.786752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.898 [2024-11-20 10:00:56.786786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.898 qpair failed and we were unable to recover it. 
00:27:19.898 [2024-11-20 10:00:56.796711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.898 [2024-11-20 10:00:56.796806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.898 [2024-11-20 10:00:56.796831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.898 [2024-11-20 10:00:56.796845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.898 [2024-11-20 10:00:56.796861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.898 [2024-11-20 10:00:56.796890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.898 qpair failed and we were unable to recover it. 00:27:19.898 [2024-11-20 10:00:56.806701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.898 [2024-11-20 10:00:56.806788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.898 [2024-11-20 10:00:56.806814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.898 [2024-11-20 10:00:56.806827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.898 [2024-11-20 10:00:56.806840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:19.898 [2024-11-20 10:00:56.806869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:19.898 qpair failed and we were unable to recover it. 00:27:20.158 [2024-11-20 10:00:56.816697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.158 [2024-11-20 10:00:56.816776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.158 [2024-11-20 10:00:56.816801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.158 [2024-11-20 10:00:56.816815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.158 [2024-11-20 10:00:56.816828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.158 [2024-11-20 10:00:56.816856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.158 qpair failed and we were unable to recover it. 
00:27:20.158 [2024-11-20 10:00:56.826779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.158 [2024-11-20 10:00:56.826875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.158 [2024-11-20 10:00:56.826900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.158 [2024-11-20 10:00:56.826915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.158 [2024-11-20 10:00:56.826928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.158 [2024-11-20 10:00:56.826956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.158 qpair failed and we were unable to recover it. 00:27:20.158 [2024-11-20 10:00:56.836774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.158 [2024-11-20 10:00:56.836861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.158 [2024-11-20 10:00:56.836887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.158 [2024-11-20 10:00:56.836902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.158 [2024-11-20 10:00:56.836914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.158 [2024-11-20 10:00:56.836943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.158 qpair failed and we were unable to recover it. 00:27:20.158 [2024-11-20 10:00:56.846843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.158 [2024-11-20 10:00:56.846932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.158 [2024-11-20 10:00:56.846958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.158 [2024-11-20 10:00:56.846973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.158 [2024-11-20 10:00:56.846986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.158 [2024-11-20 10:00:56.847017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.158 qpair failed and we were unable to recover it. 
00:27:20.158 [2024-11-20 10:00:56.856811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.158 [2024-11-20 10:00:56.856892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.158 [2024-11-20 10:00:56.856918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.158 [2024-11-20 10:00:56.856932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.158 [2024-11-20 10:00:56.856946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.158 [2024-11-20 10:00:56.856975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.158 qpair failed and we were unable to recover it. 00:27:20.158 [2024-11-20 10:00:56.866829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.158 [2024-11-20 10:00:56.866927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.158 [2024-11-20 10:00:56.866952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.158 [2024-11-20 10:00:56.866966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.158 [2024-11-20 10:00:56.866979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.158 [2024-11-20 10:00:56.867009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.158 qpair failed and we were unable to recover it. 00:27:20.158 [2024-11-20 10:00:56.876901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.158 [2024-11-20 10:00:56.876985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.158 [2024-11-20 10:00:56.877011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.158 [2024-11-20 10:00:56.877031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.158 [2024-11-20 10:00:56.877045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.158 [2024-11-20 10:00:56.877074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.158 qpair failed and we were unable to recover it. 
00:27:20.158 [2024-11-20 10:00:56.886949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.158 [2024-11-20 10:00:56.887053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.158 [2024-11-20 10:00:56.887079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.158 [2024-11-20 10:00:56.887092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.158 [2024-11-20 10:00:56.887104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.158 [2024-11-20 10:00:56.887133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.158 qpair failed and we were unable to recover it. 00:27:20.158 [2024-11-20 10:00:56.896949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.158 [2024-11-20 10:00:56.897082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.158 [2024-11-20 10:00:56.897108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.158 [2024-11-20 10:00:56.897122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.158 [2024-11-20 10:00:56.897135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.158 [2024-11-20 10:00:56.897164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.158 qpair failed and we were unable to recover it. 00:27:20.158 [2024-11-20 10:00:56.906963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.158 [2024-11-20 10:00:56.907046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.158 [2024-11-20 10:00:56.907070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.158 [2024-11-20 10:00:56.907084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.158 [2024-11-20 10:00:56.907098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.158 [2024-11-20 10:00:56.907126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.158 qpair failed and we were unable to recover it. 
00:27:20.158 [2024-11-20 10:00:56.917016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.158 [2024-11-20 10:00:56.917109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.158 [2024-11-20 10:00:56.917133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.158 [2024-11-20 10:00:56.917148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.158 [2024-11-20 10:00:56.917161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.158 [2024-11-20 10:00:56.917196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.158 qpair failed and we were unable to recover it. 00:27:20.158 [2024-11-20 10:00:56.927058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.158 [2024-11-20 10:00:56.927151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.158 [2024-11-20 10:00:56.927177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.158 [2024-11-20 10:00:56.927191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.158 [2024-11-20 10:00:56.927204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.158 [2024-11-20 10:00:56.927234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.158 qpair failed and we were unable to recover it. 00:27:20.158 [2024-11-20 10:00:56.937072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.158 [2024-11-20 10:00:56.937161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.159 [2024-11-20 10:00:56.937185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.159 [2024-11-20 10:00:56.937199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.159 [2024-11-20 10:00:56.937211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.159 [2024-11-20 10:00:56.937239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.159 qpair failed and we were unable to recover it. 
00:27:20.159 [2024-11-20 10:00:56.947104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.159 [2024-11-20 10:00:56.947200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.159 [2024-11-20 10:00:56.947225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.159 [2024-11-20 10:00:56.947240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.159 [2024-11-20 10:00:56.947252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.159 [2024-11-20 10:00:56.947281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.159 qpair failed and we were unable to recover it. 00:27:20.159 [2024-11-20 10:00:56.957097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.159 [2024-11-20 10:00:56.957184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.159 [2024-11-20 10:00:56.957209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.159 [2024-11-20 10:00:56.957223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.159 [2024-11-20 10:00:56.957237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.159 [2024-11-20 10:00:56.957266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.159 qpair failed and we were unable to recover it. 00:27:20.159 [2024-11-20 10:00:56.967182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.159 [2024-11-20 10:00:56.967283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.159 [2024-11-20 10:00:56.967315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.159 [2024-11-20 10:00:56.967331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.159 [2024-11-20 10:00:56.967347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.159 [2024-11-20 10:00:56.967378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.159 qpair failed and we were unable to recover it. 
00:27:20.159 [2024-11-20 10:00:56.977192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.159 [2024-11-20 10:00:56.977278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.159 [2024-11-20 10:00:56.977310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.159 [2024-11-20 10:00:56.977327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.159 [2024-11-20 10:00:56.977340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.159 [2024-11-20 10:00:56.977369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.159 qpair failed and we were unable to recover it. 00:27:20.159 [2024-11-20 10:00:56.987197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.159 [2024-11-20 10:00:56.987275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.159 [2024-11-20 10:00:56.987299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.159 [2024-11-20 10:00:56.987325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.159 [2024-11-20 10:00:56.987339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.159 [2024-11-20 10:00:56.987370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.159 qpair failed and we were unable to recover it. 00:27:20.159 [2024-11-20 10:00:56.997322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.159 [2024-11-20 10:00:56.997419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.159 [2024-11-20 10:00:56.997444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.159 [2024-11-20 10:00:56.997458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.159 [2024-11-20 10:00:56.997471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.159 [2024-11-20 10:00:56.997501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.159 qpair failed and we were unable to recover it. 
00:27:20.159 [2024-11-20 10:00:57.007262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.159 [2024-11-20 10:00:57.007362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.159 [2024-11-20 10:00:57.007387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.159 [2024-11-20 10:00:57.007407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.159 [2024-11-20 10:00:57.007421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.159 [2024-11-20 10:00:57.007450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.159 qpair failed and we were unable to recover it. 00:27:20.159 [2024-11-20 10:00:57.017267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.159 [2024-11-20 10:00:57.017379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.159 [2024-11-20 10:00:57.017405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.159 [2024-11-20 10:00:57.017419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.159 [2024-11-20 10:00:57.017432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.159 [2024-11-20 10:00:57.017461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.159 qpair failed and we were unable to recover it. 00:27:20.159 [2024-11-20 10:00:57.027348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.159 [2024-11-20 10:00:57.027438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.159 [2024-11-20 10:00:57.027464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.159 [2024-11-20 10:00:57.027478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.159 [2024-11-20 10:00:57.027491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.159 [2024-11-20 10:00:57.027520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.159 qpair failed and we were unable to recover it. 
00:27:20.159 [2024-11-20 10:00:57.037348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.159 [2024-11-20 10:00:57.037430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.159 [2024-11-20 10:00:57.037457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.159 [2024-11-20 10:00:57.037471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.159 [2024-11-20 10:00:57.037485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.159 [2024-11-20 10:00:57.037516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.159 qpair failed and we were unable to recover it. 00:27:20.159 [2024-11-20 10:00:57.047386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.159 [2024-11-20 10:00:57.047477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.159 [2024-11-20 10:00:57.047504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.159 [2024-11-20 10:00:57.047519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.159 [2024-11-20 10:00:57.047532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.159 [2024-11-20 10:00:57.047568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.159 qpair failed and we were unable to recover it. 00:27:20.159 [2024-11-20 10:00:57.057403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.159 [2024-11-20 10:00:57.057485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.159 [2024-11-20 10:00:57.057510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.159 [2024-11-20 10:00:57.057525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.159 [2024-11-20 10:00:57.057538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.159 [2024-11-20 10:00:57.057567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.159 qpair failed and we were unable to recover it. 
00:27:20.159 [2024-11-20 10:00:57.067466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.159 [2024-11-20 10:00:57.067558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.159 [2024-11-20 10:00:57.067584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.160 [2024-11-20 10:00:57.067598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.160 [2024-11-20 10:00:57.067612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.160 [2024-11-20 10:00:57.067641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.160 qpair failed and we were unable to recover it. 00:27:20.418 [2024-11-20 10:00:57.077483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.418 [2024-11-20 10:00:57.077568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.418 [2024-11-20 10:00:57.077594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.418 [2024-11-20 10:00:57.077608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.418 [2024-11-20 10:00:57.077622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.418 [2024-11-20 10:00:57.077651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.418 qpair failed and we were unable to recover it. 00:27:20.418 [2024-11-20 10:00:57.087591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.418 [2024-11-20 10:00:57.087686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.418 [2024-11-20 10:00:57.087711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.418 [2024-11-20 10:00:57.087725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.418 [2024-11-20 10:00:57.087738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.418 [2024-11-20 10:00:57.087767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.418 qpair failed and we were unable to recover it. 
00:27:20.418 [2024-11-20 10:00:57.097520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.418 [2024-11-20 10:00:57.097617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.418 [2024-11-20 10:00:57.097643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.418 [2024-11-20 10:00:57.097657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.418 [2024-11-20 10:00:57.097670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.418 [2024-11-20 10:00:57.097700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.418 qpair failed and we were unable to recover it. 00:27:20.418 [2024-11-20 10:00:57.107617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.418 [2024-11-20 10:00:57.107712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.418 [2024-11-20 10:00:57.107739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.418 [2024-11-20 10:00:57.107753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.418 [2024-11-20 10:00:57.107766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.418 [2024-11-20 10:00:57.107795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.418 qpair failed and we were unable to recover it. 00:27:20.418 [2024-11-20 10:00:57.117651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.418 [2024-11-20 10:00:57.117783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.418 [2024-11-20 10:00:57.117812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.418 [2024-11-20 10:00:57.117829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.418 [2024-11-20 10:00:57.117842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.418 [2024-11-20 10:00:57.117873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.418 qpair failed and we were unable to recover it. 
00:27:20.418 [2024-11-20 10:00:57.127690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.418 [2024-11-20 10:00:57.127782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.418 [2024-11-20 10:00:57.127808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.418 [2024-11-20 10:00:57.127823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.418 [2024-11-20 10:00:57.127835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.418 [2024-11-20 10:00:57.127865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.418 qpair failed and we were unable to recover it. 00:27:20.418 [2024-11-20 10:00:57.137660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.418 [2024-11-20 10:00:57.137749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.418 [2024-11-20 10:00:57.137775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.418 [2024-11-20 10:00:57.137795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.418 [2024-11-20 10:00:57.137809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.418 [2024-11-20 10:00:57.137838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.418 qpair failed and we were unable to recover it. 00:27:20.418 [2024-11-20 10:00:57.147679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.418 [2024-11-20 10:00:57.147757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.418 [2024-11-20 10:00:57.147782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.418 [2024-11-20 10:00:57.147796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.418 [2024-11-20 10:00:57.147809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.418 [2024-11-20 10:00:57.147838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.418 qpair failed and we were unable to recover it. 
00:27:20.418 [2024-11-20 10:00:57.157760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.418 [2024-11-20 10:00:57.157845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.418 [2024-11-20 10:00:57.157870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.419 [2024-11-20 10:00:57.157884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.419 [2024-11-20 10:00:57.157897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.419 [2024-11-20 10:00:57.157926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.419 qpair failed and we were unable to recover it. 00:27:20.419 [2024-11-20 10:00:57.167733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.419 [2024-11-20 10:00:57.167826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.419 [2024-11-20 10:00:57.167852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.419 [2024-11-20 10:00:57.167866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.419 [2024-11-20 10:00:57.167879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.419 [2024-11-20 10:00:57.167908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.419 qpair failed and we were unable to recover it. 00:27:20.419 [2024-11-20 10:00:57.177788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.419 [2024-11-20 10:00:57.177918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.419 [2024-11-20 10:00:57.177943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.419 [2024-11-20 10:00:57.177957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.419 [2024-11-20 10:00:57.177970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.419 [2024-11-20 10:00:57.178005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.419 qpair failed and we were unable to recover it. 
00:27:20.419 [2024-11-20 10:00:57.187779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.419 [2024-11-20 10:00:57.187858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.419 [2024-11-20 10:00:57.187884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.419 [2024-11-20 10:00:57.187898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.419 [2024-11-20 10:00:57.187911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.419 [2024-11-20 10:00:57.187940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.419 qpair failed and we were unable to recover it. 00:27:20.419 [2024-11-20 10:00:57.197844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.419 [2024-11-20 10:00:57.197928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.419 [2024-11-20 10:00:57.197953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.419 [2024-11-20 10:00:57.197968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.419 [2024-11-20 10:00:57.197981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.419 [2024-11-20 10:00:57.198010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.419 qpair failed and we were unable to recover it. 00:27:20.419 [2024-11-20 10:00:57.207883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.419 [2024-11-20 10:00:57.207971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.419 [2024-11-20 10:00:57.207997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.419 [2024-11-20 10:00:57.208010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.419 [2024-11-20 10:00:57.208023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.419 [2024-11-20 10:00:57.208054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.419 qpair failed and we were unable to recover it. 
00:27:20.419 [2024-11-20 10:00:57.217918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.419 [2024-11-20 10:00:57.218051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.419 [2024-11-20 10:00:57.218076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.419 [2024-11-20 10:00:57.218091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.419 [2024-11-20 10:00:57.218103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.419 [2024-11-20 10:00:57.218132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.419 qpair failed and we were unable to recover it. 00:27:20.419 [2024-11-20 10:00:57.227929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.419 [2024-11-20 10:00:57.228043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.419 [2024-11-20 10:00:57.228068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.419 [2024-11-20 10:00:57.228082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.419 [2024-11-20 10:00:57.228095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.419 [2024-11-20 10:00:57.228124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.419 qpair failed and we were unable to recover it. 00:27:20.419 [2024-11-20 10:00:57.237930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.419 [2024-11-20 10:00:57.238019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.419 [2024-11-20 10:00:57.238045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.419 [2024-11-20 10:00:57.238059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.419 [2024-11-20 10:00:57.238072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.419 [2024-11-20 10:00:57.238101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.419 qpair failed and we were unable to recover it. 
00:27:20.419 [2024-11-20 10:00:57.247956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.419 [2024-11-20 10:00:57.248043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.419 [2024-11-20 10:00:57.248069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.419 [2024-11-20 10:00:57.248083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.419 [2024-11-20 10:00:57.248096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.419 [2024-11-20 10:00:57.248125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.419 qpair failed and we were unable to recover it. 00:27:20.419 [2024-11-20 10:00:57.257988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.419 [2024-11-20 10:00:57.258073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.419 [2024-11-20 10:00:57.258099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.419 [2024-11-20 10:00:57.258113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.419 [2024-11-20 10:00:57.258126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.419 [2024-11-20 10:00:57.258157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.419 qpair failed and we were unable to recover it. 00:27:20.419 [2024-11-20 10:00:57.268060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.419 [2024-11-20 10:00:57.268144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.419 [2024-11-20 10:00:57.268169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.419 [2024-11-20 10:00:57.268190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.419 [2024-11-20 10:00:57.268203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.419 [2024-11-20 10:00:57.268232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.419 qpair failed and we were unable to recover it. 
00:27:20.419 [2024-11-20 10:00:57.278054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.419 [2024-11-20 10:00:57.278140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.419 [2024-11-20 10:00:57.278166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.419 [2024-11-20 10:00:57.278180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.419 [2024-11-20 10:00:57.278193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.419 [2024-11-20 10:00:57.278224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.419 qpair failed and we were unable to recover it. 00:27:20.419 [2024-11-20 10:00:57.288131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.419 [2024-11-20 10:00:57.288218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.419 [2024-11-20 10:00:57.288244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.420 [2024-11-20 10:00:57.288258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.420 [2024-11-20 10:00:57.288270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.420 [2024-11-20 10:00:57.288311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.420 qpair failed and we were unable to recover it. 00:27:20.420 [2024-11-20 10:00:57.298126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.420 [2024-11-20 10:00:57.298210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.420 [2024-11-20 10:00:57.298236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.420 [2024-11-20 10:00:57.298251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.420 [2024-11-20 10:00:57.298264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.420 [2024-11-20 10:00:57.298293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.420 qpair failed and we were unable to recover it. 
00:27:20.420 [2024-11-20 10:00:57.308188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.420 [2024-11-20 10:00:57.308286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.420 [2024-11-20 10:00:57.308323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.420 [2024-11-20 10:00:57.308339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.420 [2024-11-20 10:00:57.308352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.420 [2024-11-20 10:00:57.308387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.420 qpair failed and we were unable to recover it. 00:27:20.420 [2024-11-20 10:00:57.318174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.420 [2024-11-20 10:00:57.318256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.420 [2024-11-20 10:00:57.318282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.420 [2024-11-20 10:00:57.318297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.420 [2024-11-20 10:00:57.318321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.420 [2024-11-20 10:00:57.318352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.420 qpair failed and we were unable to recover it. 00:27:20.420 [2024-11-20 10:00:57.328194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.420 [2024-11-20 10:00:57.328283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.420 [2024-11-20 10:00:57.328316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.420 [2024-11-20 10:00:57.328332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.420 [2024-11-20 10:00:57.328345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.420 [2024-11-20 10:00:57.328375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.420 qpair failed and we were unable to recover it. 
00:27:20.678 [2024-11-20 10:00:57.338218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.678 [2024-11-20 10:00:57.338316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.678 [2024-11-20 10:00:57.338343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.678 [2024-11-20 10:00:57.338358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.678 [2024-11-20 10:00:57.338371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.678 [2024-11-20 10:00:57.338400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.678 qpair failed and we were unable to recover it. 00:27:20.678 [2024-11-20 10:00:57.348255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.678 [2024-11-20 10:00:57.348343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.678 [2024-11-20 10:00:57.348368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.678 [2024-11-20 10:00:57.348382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.678 [2024-11-20 10:00:57.348394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.678 [2024-11-20 10:00:57.348424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.678 qpair failed and we were unable to recover it. 00:27:20.678 [2024-11-20 10:00:57.358295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.678 [2024-11-20 10:00:57.358407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.678 [2024-11-20 10:00:57.358433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.678 [2024-11-20 10:00:57.358448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.678 [2024-11-20 10:00:57.358461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.678 [2024-11-20 10:00:57.358491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.678 qpair failed and we were unable to recover it. 
00:27:20.678 [2024-11-20 10:00:57.368323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.678 [2024-11-20 10:00:57.368464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.678 [2024-11-20 10:00:57.368489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.678 [2024-11-20 10:00:57.368503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.678 [2024-11-20 10:00:57.368517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.678 [2024-11-20 10:00:57.368548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.678 qpair failed and we were unable to recover it. 00:27:20.678 [2024-11-20 10:00:57.378353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.678 [2024-11-20 10:00:57.378447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.678 [2024-11-20 10:00:57.378473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.678 [2024-11-20 10:00:57.378487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.678 [2024-11-20 10:00:57.378500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.678 [2024-11-20 10:00:57.378530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.678 qpair failed and we were unable to recover it. 00:27:20.678 [2024-11-20 10:00:57.388364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.678 [2024-11-20 10:00:57.388476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.678 [2024-11-20 10:00:57.388501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.678 [2024-11-20 10:00:57.388515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.678 [2024-11-20 10:00:57.388527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.678 [2024-11-20 10:00:57.388557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.678 qpair failed and we were unable to recover it. 
00:27:20.678 [2024-11-20 10:00:57.398382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.678 [2024-11-20 10:00:57.398463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.678 [2024-11-20 10:00:57.398488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.678 [2024-11-20 10:00:57.398509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.678 [2024-11-20 10:00:57.398522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.679 [2024-11-20 10:00:57.398554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.679 qpair failed and we were unable to recover it. 00:27:20.679 [2024-11-20 10:00:57.408442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.679 [2024-11-20 10:00:57.408535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.679 [2024-11-20 10:00:57.408561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.679 [2024-11-20 10:00:57.408575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.679 [2024-11-20 10:00:57.408587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.679 [2024-11-20 10:00:57.408617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.679 qpair failed and we were unable to recover it. 00:27:20.679 [2024-11-20 10:00:57.418474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.679 [2024-11-20 10:00:57.418597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.679 [2024-11-20 10:00:57.418623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.679 [2024-11-20 10:00:57.418637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.679 [2024-11-20 10:00:57.418651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.679 [2024-11-20 10:00:57.418681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.679 qpair failed and we were unable to recover it. 
00:27:20.679 [2024-11-20 10:00:57.428487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.679 [2024-11-20 10:00:57.428617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.679 [2024-11-20 10:00:57.428643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.679 [2024-11-20 10:00:57.428657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.679 [2024-11-20 10:00:57.428669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.679 [2024-11-20 10:00:57.428699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.679 qpair failed and we were unable to recover it. 00:27:20.679 [2024-11-20 10:00:57.438541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.679 [2024-11-20 10:00:57.438626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.679 [2024-11-20 10:00:57.438653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.679 [2024-11-20 10:00:57.438667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.679 [2024-11-20 10:00:57.438680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.679 [2024-11-20 10:00:57.438718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.679 qpair failed and we were unable to recover it. 00:27:20.679 [2024-11-20 10:00:57.448599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.679 [2024-11-20 10:00:57.448701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.679 [2024-11-20 10:00:57.448727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.679 [2024-11-20 10:00:57.448741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.679 [2024-11-20 10:00:57.448754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.679 [2024-11-20 10:00:57.448784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.679 qpair failed and we were unable to recover it. 
00:27:20.679 [2024-11-20 10:00:57.458555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.679 [2024-11-20 10:00:57.458653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.679 [2024-11-20 10:00:57.458679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.679 [2024-11-20 10:00:57.458693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.679 [2024-11-20 10:00:57.458706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.679 [2024-11-20 10:00:57.458734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.679 qpair failed and we were unable to recover it. 00:27:20.679 [2024-11-20 10:00:57.468594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.679 [2024-11-20 10:00:57.468683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.679 [2024-11-20 10:00:57.468708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.679 [2024-11-20 10:00:57.468723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.679 [2024-11-20 10:00:57.468736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.679 [2024-11-20 10:00:57.468764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.679 qpair failed and we were unable to recover it. 00:27:20.679 [2024-11-20 10:00:57.478655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.679 [2024-11-20 10:00:57.478741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.679 [2024-11-20 10:00:57.478767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.679 [2024-11-20 10:00:57.478781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.679 [2024-11-20 10:00:57.478793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.679 [2024-11-20 10:00:57.478823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.679 qpair failed and we were unable to recover it. 
00:27:20.679 [2024-11-20 10:00:57.488659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.679 [2024-11-20 10:00:57.488753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.679 [2024-11-20 10:00:57.488778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.679 [2024-11-20 10:00:57.488793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.679 [2024-11-20 10:00:57.488805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.679 [2024-11-20 10:00:57.488834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.679 qpair failed and we were unable to recover it. 00:27:20.679 [2024-11-20 10:00:57.498686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.679 [2024-11-20 10:00:57.498817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.679 [2024-11-20 10:00:57.498843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.679 [2024-11-20 10:00:57.498857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.679 [2024-11-20 10:00:57.498869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.679 [2024-11-20 10:00:57.498898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.679 qpair failed and we were unable to recover it. 00:27:20.679 [2024-11-20 10:00:57.508696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.680 [2024-11-20 10:00:57.508802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.680 [2024-11-20 10:00:57.508827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.680 [2024-11-20 10:00:57.508841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.680 [2024-11-20 10:00:57.508855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.680 [2024-11-20 10:00:57.508884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.680 qpair failed and we were unable to recover it. 
00:27:20.680 [2024-11-20 10:00:57.518725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.680 [2024-11-20 10:00:57.518824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.680 [2024-11-20 10:00:57.518849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.680 [2024-11-20 10:00:57.518864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.680 [2024-11-20 10:00:57.518877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.680 [2024-11-20 10:00:57.518906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.680 qpair failed and we were unable to recover it. 00:27:20.680 [2024-11-20 10:00:57.528798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.680 [2024-11-20 10:00:57.528920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.680 [2024-11-20 10:00:57.528949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.680 [2024-11-20 10:00:57.528972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.680 [2024-11-20 10:00:57.528988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.680 [2024-11-20 10:00:57.529019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.680 qpair failed and we were unable to recover it. 00:27:20.680 [2024-11-20 10:00:57.538785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.680 [2024-11-20 10:00:57.538917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.680 [2024-11-20 10:00:57.538943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.680 [2024-11-20 10:00:57.538958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.680 [2024-11-20 10:00:57.538973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.680 [2024-11-20 10:00:57.539002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.680 qpair failed and we were unable to recover it. 
00:27:20.680 [2024-11-20 10:00:57.548855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.680 [2024-11-20 10:00:57.548963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.680 [2024-11-20 10:00:57.548988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.680 [2024-11-20 10:00:57.549002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.680 [2024-11-20 10:00:57.549016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.680 [2024-11-20 10:00:57.549045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.680 qpair failed and we were unable to recover it. 00:27:20.680 [2024-11-20 10:00:57.558843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.680 [2024-11-20 10:00:57.558962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.680 [2024-11-20 10:00:57.558987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.680 [2024-11-20 10:00:57.559002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.680 [2024-11-20 10:00:57.559015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.680 [2024-11-20 10:00:57.559044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.680 qpair failed and we were unable to recover it. 00:27:20.680 [2024-11-20 10:00:57.568877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.680 [2024-11-20 10:00:57.569014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.680 [2024-11-20 10:00:57.569039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.680 [2024-11-20 10:00:57.569054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.680 [2024-11-20 10:00:57.569067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.680 [2024-11-20 10:00:57.569102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.680 qpair failed and we were unable to recover it. 
00:27:20.680 [2024-11-20 10:00:57.578962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.680 [2024-11-20 10:00:57.579049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.680 [2024-11-20 10:00:57.579075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.680 [2024-11-20 10:00:57.579089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.680 [2024-11-20 10:00:57.579102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.680 [2024-11-20 10:00:57.579131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.680 qpair failed and we were unable to recover it. 00:27:20.680 [2024-11-20 10:00:57.588978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.680 [2024-11-20 10:00:57.589071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.680 [2024-11-20 10:00:57.589099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.680 [2024-11-20 10:00:57.589116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.680 [2024-11-20 10:00:57.589129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.680 [2024-11-20 10:00:57.589159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.680 qpair failed and we were unable to recover it. 00:27:20.938 [2024-11-20 10:00:57.598998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.938 [2024-11-20 10:00:57.599106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.938 [2024-11-20 10:00:57.599131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.938 [2024-11-20 10:00:57.599146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.938 [2024-11-20 10:00:57.599159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.938 [2024-11-20 10:00:57.599189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.938 qpair failed and we were unable to recover it. 
00:27:20.938 [2024-11-20 10:00:57.608983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.938 [2024-11-20 10:00:57.609077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.938 [2024-11-20 10:00:57.609103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.938 [2024-11-20 10:00:57.609117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.938 [2024-11-20 10:00:57.609130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.938 [2024-11-20 10:00:57.609159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.938 qpair failed and we were unable to recover it. 00:27:20.938 [2024-11-20 10:00:57.619013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.938 [2024-11-20 10:00:57.619102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.938 [2024-11-20 10:00:57.619127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.938 [2024-11-20 10:00:57.619141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.938 [2024-11-20 10:00:57.619154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.938 [2024-11-20 10:00:57.619183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.938 qpair failed and we were unable to recover it. 00:27:20.938 [2024-11-20 10:00:57.629063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.938 [2024-11-20 10:00:57.629165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.938 [2024-11-20 10:00:57.629191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.938 [2024-11-20 10:00:57.629205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.938 [2024-11-20 10:00:57.629217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.938 [2024-11-20 10:00:57.629248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.938 qpair failed and we were unable to recover it. 
00:27:20.938 [2024-11-20 10:00:57.639042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.938 [2024-11-20 10:00:57.639142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.939 [2024-11-20 10:00:57.639168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.939 [2024-11-20 10:00:57.639182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.939 [2024-11-20 10:00:57.639195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.939 [2024-11-20 10:00:57.639224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 10:00:57.649103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.939 [2024-11-20 10:00:57.649192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.939 [2024-11-20 10:00:57.649218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.939 [2024-11-20 10:00:57.649232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.939 [2024-11-20 10:00:57.649245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.939 [2024-11-20 10:00:57.649276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 10:00:57.659149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.939 [2024-11-20 10:00:57.659242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.939 [2024-11-20 10:00:57.659268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.939 [2024-11-20 10:00:57.659289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.939 [2024-11-20 10:00:57.659309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.939 [2024-11-20 10:00:57.659342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.939 qpair failed and we were unable to recover it. 
00:27:20.939 [2024-11-20 10:00:57.669179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.939 [2024-11-20 10:00:57.669264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.939 [2024-11-20 10:00:57.669290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.939 [2024-11-20 10:00:57.669350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.939 [2024-11-20 10:00:57.669369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.939 [2024-11-20 10:00:57.669401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 10:00:57.679188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.939 [2024-11-20 10:00:57.679273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.939 [2024-11-20 10:00:57.679299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.939 [2024-11-20 10:00:57.679325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.939 [2024-11-20 10:00:57.679338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.939 [2024-11-20 10:00:57.679368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 10:00:57.689234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.939 [2024-11-20 10:00:57.689361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.939 [2024-11-20 10:00:57.689388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.939 [2024-11-20 10:00:57.689402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.939 [2024-11-20 10:00:57.689415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.939 [2024-11-20 10:00:57.689446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.939 qpair failed and we were unable to recover it. 
00:27:20.939 [2024-11-20 10:00:57.699235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.939 [2024-11-20 10:00:57.699322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.939 [2024-11-20 10:00:57.699348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.939 [2024-11-20 10:00:57.699362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.939 [2024-11-20 10:00:57.699375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.939 [2024-11-20 10:00:57.699410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 10:00:57.709281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.939 [2024-11-20 10:00:57.709407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.939 [2024-11-20 10:00:57.709433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.939 [2024-11-20 10:00:57.709447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.939 [2024-11-20 10:00:57.709459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.939 [2024-11-20 10:00:57.709489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 10:00:57.719328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.939 [2024-11-20 10:00:57.719417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.939 [2024-11-20 10:00:57.719443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.939 [2024-11-20 10:00:57.719457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.939 [2024-11-20 10:00:57.719470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.939 [2024-11-20 10:00:57.719500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.939 qpair failed and we were unable to recover it. 
00:27:20.939 [2024-11-20 10:00:57.729369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.939 [2024-11-20 10:00:57.729460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.939 [2024-11-20 10:00:57.729485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.939 [2024-11-20 10:00:57.729499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.939 [2024-11-20 10:00:57.729512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.939 [2024-11-20 10:00:57.729542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 10:00:57.739405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.939 [2024-11-20 10:00:57.739492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.939 [2024-11-20 10:00:57.739518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.939 [2024-11-20 10:00:57.739532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.939 [2024-11-20 10:00:57.739544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.939 [2024-11-20 10:00:57.739573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 10:00:57.749420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.939 [2024-11-20 10:00:57.749510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.939 [2024-11-20 10:00:57.749537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.939 [2024-11-20 10:00:57.749556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.939 [2024-11-20 10:00:57.749570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.939 [2024-11-20 10:00:57.749600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.939 qpair failed and we were unable to recover it. 
00:27:20.939 [2024-11-20 10:00:57.759474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.939 [2024-11-20 10:00:57.759562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.939 [2024-11-20 10:00:57.759588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.939 [2024-11-20 10:00:57.759602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.939 [2024-11-20 10:00:57.759614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.939 [2024-11-20 10:00:57.759645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.939 qpair failed and we were unable to recover it. 00:27:20.939 [2024-11-20 10:00:57.769523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.940 [2024-11-20 10:00:57.769627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.940 [2024-11-20 10:00:57.769652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.940 [2024-11-20 10:00:57.769666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.940 [2024-11-20 10:00:57.769679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.940 [2024-11-20 10:00:57.769708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.940 qpair failed and we were unable to recover it. 00:27:20.940 [2024-11-20 10:00:57.779470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.940 [2024-11-20 10:00:57.779559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.940 [2024-11-20 10:00:57.779585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.940 [2024-11-20 10:00:57.779599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.940 [2024-11-20 10:00:57.779613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.940 [2024-11-20 10:00:57.779642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.940 qpair failed and we were unable to recover it. 
00:27:20.940 [2024-11-20 10:00:57.789558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.940 [2024-11-20 10:00:57.789646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.940 [2024-11-20 10:00:57.789676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.940 [2024-11-20 10:00:57.789691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.940 [2024-11-20 10:00:57.789704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.940 [2024-11-20 10:00:57.789733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.940 qpair failed and we were unable to recover it. 00:27:20.940 [2024-11-20 10:00:57.799535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.940 [2024-11-20 10:00:57.799659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.940 [2024-11-20 10:00:57.799685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.940 [2024-11-20 10:00:57.799699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.940 [2024-11-20 10:00:57.799712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.940 [2024-11-20 10:00:57.799741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.940 qpair failed and we were unable to recover it. 00:27:20.940 [2024-11-20 10:00:57.809578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.940 [2024-11-20 10:00:57.809667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.940 [2024-11-20 10:00:57.809693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.940 [2024-11-20 10:00:57.809709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.940 [2024-11-20 10:00:57.809723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.940 [2024-11-20 10:00:57.809752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.940 qpair failed and we were unable to recover it. 
00:27:20.940 [2024-11-20 10:00:57.819593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.940 [2024-11-20 10:00:57.819692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.940 [2024-11-20 10:00:57.819717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.940 [2024-11-20 10:00:57.819732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.940 [2024-11-20 10:00:57.819744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.940 [2024-11-20 10:00:57.819773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.940 qpair failed and we were unable to recover it. 00:27:20.940 [2024-11-20 10:00:57.829623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.940 [2024-11-20 10:00:57.829755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.940 [2024-11-20 10:00:57.829782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.940 [2024-11-20 10:00:57.829796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.940 [2024-11-20 10:00:57.829808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.940 [2024-11-20 10:00:57.829842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.940 qpair failed and we were unable to recover it. 00:27:20.940 [2024-11-20 10:00:57.839726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.940 [2024-11-20 10:00:57.839854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.940 [2024-11-20 10:00:57.839879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.940 [2024-11-20 10:00:57.839894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.940 [2024-11-20 10:00:57.839906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.940 [2024-11-20 10:00:57.839935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.940 qpair failed and we were unable to recover it. 
00:27:20.940 [2024-11-20 10:00:57.849690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.940 [2024-11-20 10:00:57.849780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.940 [2024-11-20 10:00:57.849805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.940 [2024-11-20 10:00:57.849819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.940 [2024-11-20 10:00:57.849833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:20.940 [2024-11-20 10:00:57.849862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:20.940 qpair failed and we were unable to recover it. 00:27:21.199 [2024-11-20 10:00:57.859740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.199 [2024-11-20 10:00:57.859824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.199 [2024-11-20 10:00:57.859849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.199 [2024-11-20 10:00:57.859864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.199 [2024-11-20 10:00:57.859877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.199 [2024-11-20 10:00:57.859906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-11-20 10:00:57.869764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.199 [2024-11-20 10:00:57.869846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.199 [2024-11-20 10:00:57.869871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.199 [2024-11-20 10:00:57.869885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.199 [2024-11-20 10:00:57.869898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.199 [2024-11-20 10:00:57.869927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.199 qpair failed and we were unable to recover it. 
00:27:21.199 [2024-11-20 10:00:57.879760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.199 [2024-11-20 10:00:57.879847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.199 [2024-11-20 10:00:57.879872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.199 [2024-11-20 10:00:57.879886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.199 [2024-11-20 10:00:57.879900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.199 [2024-11-20 10:00:57.879929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-11-20 10:00:57.889815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.199 [2024-11-20 10:00:57.889927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.199 [2024-11-20 10:00:57.889952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.199 [2024-11-20 10:00:57.889966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.199 [2024-11-20 10:00:57.889978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.199 [2024-11-20 10:00:57.890007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-11-20 10:00:57.899890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.199 [2024-11-20 10:00:57.899989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.199 [2024-11-20 10:00:57.900014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.199 [2024-11-20 10:00:57.900028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.199 [2024-11-20 10:00:57.900042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.199 [2024-11-20 10:00:57.900071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.199 qpair failed and we were unable to recover it. 
00:27:21.199 [2024-11-20 10:00:57.909867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.199 [2024-11-20 10:00:57.909947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.199 [2024-11-20 10:00:57.909972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.199 [2024-11-20 10:00:57.909986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.199 [2024-11-20 10:00:57.909999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.199 [2024-11-20 10:00:57.910027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-11-20 10:00:57.919888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.199 [2024-11-20 10:00:57.919975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.199 [2024-11-20 10:00:57.920006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.199 [2024-11-20 10:00:57.920021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.199 [2024-11-20 10:00:57.920034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.199 [2024-11-20 10:00:57.920063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-11-20 10:00:57.929949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.199 [2024-11-20 10:00:57.930040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.199 [2024-11-20 10:00:57.930064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.199 [2024-11-20 10:00:57.930079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.199 [2024-11-20 10:00:57.930091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.199 [2024-11-20 10:00:57.930121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.199 qpair failed and we were unable to recover it. 
00:27:21.199 [2024-11-20 10:00:57.939955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.199 [2024-11-20 10:00:57.940035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.199 [2024-11-20 10:00:57.940059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.199 [2024-11-20 10:00:57.940073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.199 [2024-11-20 10:00:57.940086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.199 [2024-11-20 10:00:57.940113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.199 qpair failed and we were unable to recover it. 00:27:21.199 [2024-11-20 10:00:57.949968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.199 [2024-11-20 10:00:57.950095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.199 [2024-11-20 10:00:57.950120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.199 [2024-11-20 10:00:57.950134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.199 [2024-11-20 10:00:57.950149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.200 [2024-11-20 10:00:57.950178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.200 qpair failed and we were unable to recover it. 00:27:21.200 [2024-11-20 10:00:57.959988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.200 [2024-11-20 10:00:57.960074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.200 [2024-11-20 10:00:57.960103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.200 [2024-11-20 10:00:57.960119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.200 [2024-11-20 10:00:57.960132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.200 [2024-11-20 10:00:57.960169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.200 qpair failed and we were unable to recover it. 
00:27:21.200 [2024-11-20 10:00:57.970050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.200 [2024-11-20 10:00:57.970168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.200 [2024-11-20 10:00:57.970194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.200 [2024-11-20 10:00:57.970208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.200 [2024-11-20 10:00:57.970220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.200 [2024-11-20 10:00:57.970250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.200 qpair failed and we were unable to recover it. 00:27:21.200 [2024-11-20 10:00:57.980044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.200 [2024-11-20 10:00:57.980131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.200 [2024-11-20 10:00:57.980156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.200 [2024-11-20 10:00:57.980170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.200 [2024-11-20 10:00:57.980185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.200 [2024-11-20 10:00:57.980214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.200 qpair failed and we were unable to recover it. 00:27:21.200 [2024-11-20 10:00:57.990108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.200 [2024-11-20 10:00:57.990192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.200 [2024-11-20 10:00:57.990218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.200 [2024-11-20 10:00:57.990232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.200 [2024-11-20 10:00:57.990244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.200 [2024-11-20 10:00:57.990274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.200 qpair failed and we were unable to recover it. 
00:27:21.200 [2024-11-20 10:00:58.000111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.200 [2024-11-20 10:00:58.000200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.200 [2024-11-20 10:00:58.000226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.200 [2024-11-20 10:00:58.000241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.200 [2024-11-20 10:00:58.000254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.200 [2024-11-20 10:00:58.000283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.200 qpair failed and we were unable to recover it. 00:27:21.200 [2024-11-20 10:00:58.010185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.200 [2024-11-20 10:00:58.010275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.200 [2024-11-20 10:00:58.010301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.200 [2024-11-20 10:00:58.010323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.200 [2024-11-20 10:00:58.010336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.200 [2024-11-20 10:00:58.010366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.200 qpair failed and we were unable to recover it. 00:27:21.200 [2024-11-20 10:00:58.020163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.200 [2024-11-20 10:00:58.020273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.200 [2024-11-20 10:00:58.020298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.200 [2024-11-20 10:00:58.020320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.200 [2024-11-20 10:00:58.020334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.200 [2024-11-20 10:00:58.020363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.200 qpair failed and we were unable to recover it. 
00:27:21.200 [2024-11-20 10:00:58.030210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.200 [2024-11-20 10:00:58.030295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.200 [2024-11-20 10:00:58.030333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.200 [2024-11-20 10:00:58.030349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.200 [2024-11-20 10:00:58.030362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.200 [2024-11-20 10:00:58.030392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.200 qpair failed and we were unable to recover it. 00:27:21.200 [2024-11-20 10:00:58.040224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.200 [2024-11-20 10:00:58.040317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.200 [2024-11-20 10:00:58.040343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.200 [2024-11-20 10:00:58.040357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.200 [2024-11-20 10:00:58.040370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.200 [2024-11-20 10:00:58.040400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.200 qpair failed and we were unable to recover it. 00:27:21.200 [2024-11-20 10:00:58.050270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.200 [2024-11-20 10:00:58.050392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.200 [2024-11-20 10:00:58.050423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.200 [2024-11-20 10:00:58.050438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.200 [2024-11-20 10:00:58.050453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.200 [2024-11-20 10:00:58.050483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.200 qpair failed and we were unable to recover it. 
00:27:21.200 [2024-11-20 10:00:58.060312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.200 [2024-11-20 10:00:58.060418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.200 [2024-11-20 10:00:58.060444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.200 [2024-11-20 10:00:58.060458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.200 [2024-11-20 10:00:58.060470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.200 [2024-11-20 10:00:58.060500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.200 qpair failed and we were unable to recover it. 00:27:21.200 [2024-11-20 10:00:58.070346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.200 [2024-11-20 10:00:58.070457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.200 [2024-11-20 10:00:58.070483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.200 [2024-11-20 10:00:58.070498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.200 [2024-11-20 10:00:58.070513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x120dfa0 00:27:21.200 [2024-11-20 10:00:58.070542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:21.200 qpair failed and we were unable to recover it. 00:27:21.200 [2024-11-20 10:00:58.080402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.200 [2024-11-20 10:00:58.080524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.200 [2024-11-20 10:00:58.080558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.200 [2024-11-20 10:00:58.080575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.200 [2024-11-20 10:00:58.080590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a2c000b90 00:27:21.201 [2024-11-20 10:00:58.080623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:21.201 qpair failed and we were unable to recover it. 
00:27:21.201 [2024-11-20 10:00:58.090431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.201 [2024-11-20 10:00:58.090544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.201 [2024-11-20 10:00:58.090571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.201 [2024-11-20 10:00:58.090585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.201 [2024-11-20 10:00:58.090598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a2c000b90 00:27:21.201 [2024-11-20 10:00:58.090636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:21.201 qpair failed and we were unable to recover it. 00:27:21.201 [2024-11-20 10:00:58.100436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.201 [2024-11-20 10:00:58.100530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.201 [2024-11-20 10:00:58.100562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.201 [2024-11-20 10:00:58.100578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.201 [2024-11-20 10:00:58.100592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a30000b90 00:27:21.201 [2024-11-20 10:00:58.100640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.201 qpair failed and we were unable to recover it. 00:27:21.458 [2024-11-20 10:00:58.110423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.458 [2024-11-20 10:00:58.110510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.458 [2024-11-20 10:00:58.110541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.458 [2024-11-20 10:00:58.110557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.458 [2024-11-20 10:00:58.110571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a30000b90 00:27:21.458 [2024-11-20 10:00:58.110602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.458 qpair failed and we were unable to recover it. 00:27:21.458 [2024-11-20 10:00:58.110714] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:27:21.458 A controller has encountered a failure and is being reset. 
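The block above repeats one failure signature: the target no longer recognizes controller ID 0x1, so each I/O-queue CONNECT is rejected with sct 1, sc 130 (0x82, which for a Fabrics CONNECT indicates invalid connect parameters), the host abandons the qpair, and the failed keep-alive finally forces the controller reset announced at the end of the block. A single CONNECT against the same listener can be reproduced by hand with nvme-cli, assuming the kernel nvme-tcp initiator modules are loaded and the target from this run is still serving 10.0.0.2:4420 (illustrative only, not part of the test script):

$ sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
$ sudo nvme list-subsys                                  # check which subsystems/queues are live
$ sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # tear the association back down

The test itself drives these CONNECTs through the SPDK host stack (nvme_fabric_qpair_connect_poll and nvme_tcp_ctrlr_connect_qpair_poll), so the nvme-cli commands are only a convenient stand-in for probing the target's behavior.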
00:27:21.458 [2024-11-20 10:00:58.120514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.458 [2024-11-20 10:00:58.120629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.458 [2024-11-20 10:00:58.120660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.458 [2024-11-20 10:00:58.120676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.458 [2024-11-20 10:00:58.120690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:21.458 [2024-11-20 10:00:58.120722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.458 qpair failed and we were unable to recover it. 00:27:21.458 [2024-11-20 10:00:58.130489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.458 [2024-11-20 10:00:58.130579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.458 [2024-11-20 10:00:58.130606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.458 [2024-11-20 10:00:58.130622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.458 [2024-11-20 10:00:58.130635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a38000b90 00:27:21.458 [2024-11-20 10:00:58.130673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.458 qpair failed and we were unable to recover it. 00:27:21.458 Controller properly reset. 00:27:21.458 Initializing NVMe Controllers 00:27:21.458 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:21.458 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:21.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:21.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:21.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:21.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:21.458 Initialization complete. Launching workers. 
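After the reset the initiator re-attaches cleanly and the I/O workers are associated with lcores 0-3. For comparison, attaching the same subsystem to a running SPDK application by hand would go through scripts/rpc.py with the transport parameters seen above; the bdev name Nvme0 is arbitrary here, and this is only a sketch, independent of whatever target_disconnect.sh drives internally:

$ sudo scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f IPv4 -n nqn.2016-06.io.spdk:cnode1
$ sudo scripts/rpc.py bdev_nvme_get_controllers          # confirm the controller shows up
$ sudo scripts/rpc.py bdev_nvme_detach_controller Nvme0  # detach when done

The teardown that follows in the log (modprobe -v -r nvme-tcp and the related module removals) only concerns the kernel initiator modules; an SPDK-side controller like the sketch above would be detached through the RPC rather than through rmmod.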
00:27:21.458 Starting thread on core 1 00:27:21.458 Starting thread on core 2 00:27:21.458 Starting thread on core 3 00:27:21.458 Starting thread on core 0 00:27:21.458 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:21.458 00:27:21.458 real 0m10.763s 00:27:21.458 user 0m19.352s 00:27:21.458 sys 0m5.218s 00:27:21.458 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:21.458 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.458 ************************************ 00:27:21.458 END TEST nvmf_target_disconnect_tc2 00:27:21.458 ************************************ 00:27:21.458 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:21.458 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:21.458 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:21.458 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:21.458 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:21.458 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:21.458 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:21.458 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:21.458 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:21.458 rmmod nvme_tcp 00:27:21.458 rmmod nvme_fabrics 00:27:21.459 rmmod nvme_keyring 00:27:21.459 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:21.459 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:21.459 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:21.459 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3851056 ']' 00:27:21.459 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3851056 00:27:21.459 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3851056 ']' 00:27:21.459 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3851056 00:27:21.459 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:21.459 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:21.459 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3851056 00:27:21.459 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:21.459 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:27:21.459 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3851056' 00:27:21.459 killing process with pid 3851056 00:27:21.459 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 3851056 00:27:21.459 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3851056 00:27:21.717 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:21.717 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:21.717 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:21.717 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:21.717 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:21.717 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:21.717 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:21.717 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:21.717 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:21.717 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.717 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.717 10:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.253 10:01:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:24.253 00:27:24.253 real 0m15.697s 00:27:24.253 user 0m45.729s 00:27:24.253 sys 0m7.259s 00:27:24.253 10:01:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:24.253 10:01:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:24.253 ************************************ 00:27:24.253 END TEST nvmf_target_disconnect 00:27:24.253 ************************************ 00:27:24.253 10:01:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:24.253 00:27:24.253 real 5m5.241s 00:27:24.253 user 10m47.401s 00:27:24.253 sys 1m13.836s 00:27:24.253 10:01:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:24.253 10:01:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.253 ************************************ 00:27:24.253 END TEST nvmf_host 00:27:24.253 ************************************ 00:27:24.253 10:01:00 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:24.253 10:01:00 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:24.253 10:01:00 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:24.253 10:01:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:24.253 10:01:00 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:24.253 10:01:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:24.253 ************************************ 00:27:24.253 START TEST nvmf_target_core_interrupt_mode 00:27:24.253 ************************************ 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:24.253 * Looking for test storage... 00:27:24.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:24.253 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:24.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.254 --rc genhtml_branch_coverage=1 00:27:24.254 --rc genhtml_function_coverage=1 00:27:24.254 --rc genhtml_legend=1 00:27:24.254 --rc geninfo_all_blocks=1 00:27:24.254 --rc geninfo_unexecuted_blocks=1 00:27:24.254 00:27:24.254 ' 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:24.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.254 --rc genhtml_branch_coverage=1 00:27:24.254 --rc genhtml_function_coverage=1 00:27:24.254 --rc genhtml_legend=1 00:27:24.254 --rc geninfo_all_blocks=1 00:27:24.254 --rc geninfo_unexecuted_blocks=1 00:27:24.254 00:27:24.254 ' 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:24.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.254 --rc genhtml_branch_coverage=1 00:27:24.254 --rc genhtml_function_coverage=1 00:27:24.254 --rc genhtml_legend=1 00:27:24.254 --rc geninfo_all_blocks=1 00:27:24.254 --rc geninfo_unexecuted_blocks=1 00:27:24.254 00:27:24.254 ' 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:24.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.254 --rc genhtml_branch_coverage=1 00:27:24.254 --rc genhtml_function_coverage=1 00:27:24.254 --rc genhtml_legend=1 00:27:24.254 --rc geninfo_all_blocks=1 00:27:24.254 --rc geninfo_unexecuted_blocks=1 00:27:24.254 00:27:24.254 ' 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:24.254 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:24.255 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:24.255 ************************************ 00:27:24.255 START TEST nvmf_abort 00:27:24.255 ************************************ 00:27:24.255 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:24.255 * Looking for test storage... 00:27:24.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:24.255 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:24.255 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:27:24.255 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:24.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.255 --rc genhtml_branch_coverage=1 00:27:24.255 --rc genhtml_function_coverage=1 00:27:24.255 --rc genhtml_legend=1 00:27:24.255 --rc geninfo_all_blocks=1 00:27:24.255 --rc geninfo_unexecuted_blocks=1 00:27:24.255 00:27:24.255 ' 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:24.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.255 --rc genhtml_branch_coverage=1 00:27:24.255 --rc genhtml_function_coverage=1 00:27:24.255 --rc genhtml_legend=1 00:27:24.255 --rc geninfo_all_blocks=1 00:27:24.255 --rc geninfo_unexecuted_blocks=1 00:27:24.255 00:27:24.255 ' 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:24.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.255 --rc genhtml_branch_coverage=1 00:27:24.255 --rc genhtml_function_coverage=1 00:27:24.255 --rc genhtml_legend=1 00:27:24.255 --rc geninfo_all_blocks=1 00:27:24.255 --rc geninfo_unexecuted_blocks=1 00:27:24.255 00:27:24.255 ' 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:24.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.255 --rc genhtml_branch_coverage=1 00:27:24.255 --rc genhtml_function_coverage=1 00:27:24.255 --rc genhtml_legend=1 00:27:24.255 --rc geninfo_all_blocks=1 00:27:24.255 --rc geninfo_unexecuted_blocks=1 00:27:24.255 00:27:24.255 ' 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.255 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:24.256 10:01:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:24.256 10:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:26.789 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:26.789 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:26.789 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:26.789 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:26.789 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:26.789 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:26.789 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:26.789 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:26.789 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:26.789 10:01:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:26.789 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:26.789 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:26.789 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:26.789 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:26.789 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:26.789 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.789 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.789 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.789 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.789 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:26.790 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
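Everything gather_supported_nvmf_pci_devs does in the trace above amounts to classifying NICs by PCI vendor:device ID (0x8086:0x1592/0x159b for E810, 0x8086:0x37d2 for X722, the 0x15b3:* IDs for Mellanox) and then resolving each match to its kernel net device through sysfs. A stand-alone sketch of the same idea, assuming lspci is available (the script itself walks a pre-built pci_bus_cache instead of calling lspci, so only the 0x8086:0x159b ID and the sysfs path below come straight from the log):

  # Hypothetical illustration of the discovery traced above, limited to E810 (0x159b).
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "Found $pci (0x8086 - 0x159b)"
      # each PCI function exposes its netdev name under /sys/bus/pci/devices/<addr>/net/
      for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdev" ] && echo "Found net devices under $pci: $(basename "$netdev")"
      done
  done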
00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:26.790 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:26.790 Found net devices under 0000:09:00.0: cvl_0_0 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:26.790 Found net devices under 0000:09:00.1: cvl_0_1 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:26.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:26.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:27:26.790 00:27:26.790 --- 10.0.0.2 ping statistics --- 00:27:26.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.790 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:27:26.790 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:26.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:26.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:27:26.790 00:27:26.790 --- 10.0.0.1 ping statistics --- 00:27:26.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.791 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3853891 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3853891 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3853891 ']' 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:26.791 [2024-11-20 10:01:03.444874] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:26.791 [2024-11-20 10:01:03.445946] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:27:26.791 [2024-11-20 10:01:03.446002] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.791 [2024-11-20 10:01:03.520372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:26.791 [2024-11-20 10:01:03.577447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.791 [2024-11-20 10:01:03.577499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:26.791 [2024-11-20 10:01:03.577527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:26.791 [2024-11-20 10:01:03.577539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:26.791 [2024-11-20 10:01:03.577549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:26.791 [2024-11-20 10:01:03.579009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:26.791 [2024-11-20 10:01:03.579035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:26.791 [2024-11-20 10:01:03.579041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.791 [2024-11-20 10:01:03.664407] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:26.791 [2024-11-20 10:01:03.664648] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:26.791 [2024-11-20 10:01:03.664670] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
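The namespace plumbing nvmf_tcp_init traced out above reduces to a short sequence: move the first E810 port into a private namespace as the target-facing interface, keep the second port in the root namespace as the initiator, give each side an address on 10.0.0.0/24, open TCP/4420 through iptables, and confirm reachability both ways. Collected from the commands in the log, with the same device names and addresses:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP on the initiator port
  ping -c 1 10.0.0.2                                                 # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> root namespace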
00:27:26.791 [2024-11-20 10:01:03.664890] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:26.791 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:27.049 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:27.049 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:27.049 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.049 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:27.049 [2024-11-20 10:01:03.711715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:27.049 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.049 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:27.049 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.049 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:27.049 Malloc0 00:27:27.049 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.049 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:27.049 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.049 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:27.049 Delay0 00:27:27.049 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.049 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:27.050 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.050 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:27.050 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.050 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:27.050 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
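nvmfappstart, as traced above, launches the target inside that namespace with interrupt mode enabled and a three-core mask, then blocks until the RPC socket answers. A condensed equivalent, assuming it is run from the spdk source tree (the nvmf_tgt command line is verbatim from the log; the polling loop is a paraphrase of waitforlisten, not its exact code):

  # Shared-memory id 0, all tracepoint groups, interrupt mode, cores 1-3 (mask 0xE).
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!

  # Rough stand-in for waitforlisten: poll until the default RPC socket responds.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1    # give up if the target died during startup
      sleep 0.5
  done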
00:27:27.050 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:27.050 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.050 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:27.050 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.050 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:27.050 [2024-11-20 10:01:03.779897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:27.050 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.050 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:27.050 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.050 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:27.050 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.050 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:27.050 [2024-11-20 10:01:03.881198] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:29.581 Initializing NVMe Controllers 00:27:29.581 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:29.581 controller IO queue size 128 less than required 00:27:29.581 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:29.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:29.581 Initialization complete. Launching workers. 
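Stripped of the xtrace noise, the provisioning RPCs and the workload invocation interleaved through the trace above form one short chain; rpc_cmd is rpc.py pointed at the target's socket, and the arguments below are the ones shown in the log (paths relative to the spdk tree):

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc.py bdev_malloc_create 64 4096 -b Malloc0                  # 64 MiB RAM-backed bdev, 4096-byte blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000               # 1,000,000 us (1 s) latency so in-flight I/O can be aborted
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Drive it with the abort example: one core, one second, queue depth 128.
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128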
00:27:29.581 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 29346 00:27:29.581 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29407, failed to submit 66 00:27:29.581 success 29346, unsuccessful 61, failed 0 00:27:29.581 10:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:29.581 10:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.581 10:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:29.581 10:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.581 10:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:29.581 10:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:29.581 10:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:29.581 10:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:29.581 10:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:29.581 10:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:29.581 10:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:29.581 10:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:29.581 rmmod nvme_tcp 00:27:29.581 rmmod nvme_fabrics 00:27:29.581 rmmod nvme_keyring 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3853891 ']' 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3853891 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3853891 ']' 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3853891 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3853891 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3853891' 00:27:29.581 killing process with pid 3853891 
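The counters just printed are internally consistent: the initiator saw 127 I/Os complete normally and 29346 complete as failed (aborted), 29473 in total; on the abort side it submitted 29407 abort commands and could not submit 66 more, again 29473, which lines up with one abort attempt per I/O. Of the submitted aborts, 29346 succeeded, matching the failed I/O count, and 61 completed without aborting anything:

    127 completed  + 29346 aborted           = 29473 I/Os
  29407 submitted  +    66 failed to submit  = 29473 abort attempts
  29346 success    +    61 unsuccessful      = 29407 aborts submitted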
00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3853891 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3853891 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:29.581 10:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.483 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:31.483 00:27:31.483 real 0m7.429s 00:27:31.483 user 0m9.317s 00:27:31.483 sys 0m2.994s 00:27:31.483 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:31.483 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:31.483 ************************************ 00:27:31.483 END TEST nvmf_abort 00:27:31.483 ************************************ 00:27:31.483 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:31.483 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:31.483 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:31.483 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:31.483 ************************************ 00:27:31.483 START TEST nvmf_ns_hotplug_stress 00:27:31.483 ************************************ 00:27:31.483 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:31.742 * Looking for test storage... 
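nvmftestfini's teardown, spread through the trace above, mirrors the setup in reverse: delete the subsystem, stop the target, unload the NVMe/TCP modules, drop the SPDK-tagged iptables rule, and dismantle the namespace. Condensed, with one caveat: the namespace removal runs inside _remove_spdk_ns with its output discarded, so the ip netns delete line below is an assumption rather than something the log prints:

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  kill "$nvmfpid" && wait "$nvmfpid"                    # pid 3853891 in this run
  modprobe -v -r nvme-tcp nvme-fabrics                  # the log shows nvme_keyring being dropped with them
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # keep every rule except the SPDK-tagged ACCEPT
  ip netns delete cvl_0_0_ns_spdk                       # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                              # clear the initiator-side address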
00:27:31.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:31.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.742 --rc genhtml_branch_coverage=1 00:27:31.742 --rc genhtml_function_coverage=1 00:27:31.742 --rc genhtml_legend=1 00:27:31.742 --rc geninfo_all_blocks=1 00:27:31.742 --rc geninfo_unexecuted_blocks=1 00:27:31.742 00:27:31.742 ' 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:31.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.742 --rc genhtml_branch_coverage=1 00:27:31.742 --rc genhtml_function_coverage=1 00:27:31.742 --rc genhtml_legend=1 00:27:31.742 --rc geninfo_all_blocks=1 00:27:31.742 --rc geninfo_unexecuted_blocks=1 00:27:31.742 00:27:31.742 ' 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:31.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.742 --rc genhtml_branch_coverage=1 00:27:31.742 --rc genhtml_function_coverage=1 00:27:31.742 --rc genhtml_legend=1 00:27:31.742 --rc geninfo_all_blocks=1 00:27:31.742 --rc geninfo_unexecuted_blocks=1 00:27:31.742 00:27:31.742 ' 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:31.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.742 --rc genhtml_branch_coverage=1 00:27:31.742 --rc genhtml_function_coverage=1 
00:27:31.742 --rc genhtml_legend=1 00:27:31.742 --rc geninfo_all_blocks=1 00:27:31.742 --rc geninfo_unexecuted_blocks=1 00:27:31.742 00:27:31.742 ' 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
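The host identity picked up while ns_hotplug_stress.sh sources nvmf/common.sh above comes from nvme-cli: gen-hostnqn emits a UUID-based NQN, and the host ID reused for --hostid is the UUID tail of that NQN. Roughly (the parameter expansion is an assumption about common.sh, but it reproduces the two values printed in the log):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a on this host
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the UUID part
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")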
00:27:31.742 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:31.743 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:34.274 10:01:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:34.274 10:01:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:34.274 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:34.274 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.274 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:34.275 
10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:34.275 Found net devices under 0000:09:00.0: cvl_0_0 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:34.275 Found net devices under 0000:09:00.1: cvl_0_1 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:34.275 10:01:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:34.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:34.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:27:34.275 00:27:34.275 --- 10.0.0.2 ping statistics --- 00:27:34.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.275 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:34.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:34.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:27:34.275 00:27:34.275 --- 10.0.0.1 ping statistics --- 00:27:34.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.275 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3856120 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3856120 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3856120 ']' 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:34.275 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:34.275 [2024-11-20 10:01:10.838314] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:34.275 [2024-11-20 10:01:10.839382] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:27:34.275 [2024-11-20 10:01:10.839436] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.275 [2024-11-20 10:01:10.910806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:34.275 [2024-11-20 10:01:10.970198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.275 [2024-11-20 10:01:10.970254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.275 [2024-11-20 10:01:10.970282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.275 [2024-11-20 10:01:10.970293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.275 [2024-11-20 10:01:10.970309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:34.275 [2024-11-20 10:01:10.971826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.275 [2024-11-20 10:01:10.971887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.275 [2024-11-20 10:01:10.971891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.275 [2024-11-20 10:01:11.069769] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:34.275 [2024-11-20 10:01:11.070000] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:34.275 [2024-11-20 10:01:11.070010] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:34.275 [2024-11-20 10:01:11.070277] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
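Condensed, the test-bed bring-up traced above reduces to roughly the following commands. This is a minimal sketch that reuses the cvl_0_0/cvl_0_1 interface names, the 10.0.0.1/10.0.0.2 addresses and the nvmf_tgt flags visible in the trace; the real logic (including address flushing and the SPDK_NVMF comment tag on the iptables rule) lives in nvmf/common.sh.

# move the target-side port into its own network namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port on the initiator side and check reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# launch the target inside the namespace: interrupt mode, core mask 0xE, tracepoint group mask 0xFFFF
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &

The DPDK, reactor and spdk_thread notices directly above confirm that the target came up exactly that way: three reactors on cores 1-3 and every poll group switched to interrupt mode.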
00:27:34.275 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:34.275 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:34.275 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:34.275 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:34.275 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:34.276 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.276 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:27:34.276 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:34.534 [2024-11-20 10:01:11.384578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.534 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:35.100 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:35.358 [2024-11-20 10:01:12.013026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.358 10:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:35.616 10:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:35.874 Malloc0 00:27:35.874 10:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:36.132 Delay0 00:27:36.132 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:36.389 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:36.953 NULL1 00:27:36.953 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
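Filtered down to the RPCs themselves, the target configuration assembled above is the following sequence. $rpc_py stands for the scripts/rpc.py path set at the top of ns_hotplug_stress.sh; the comments are interpretation, not part of the trace.

$rpc_py nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, the trace's NVMF_TRANSPORT_OPTS plus -u 8192
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # any host allowed, at most 10 namespaces
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc_py bdev_malloc_create 32 512 -b Malloc0                     # 32 MB RAM-backed bdev, 512-byte blocks
$rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # layers roughly 1 s of artificial latency (values in microseconds) on top of Malloc0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # becomes namespace 1
$rpc_py bdev_null_create NULL1 1000 512                          # 1000 MB null bdev, resized as the test runs
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # becomes namespace 2

The spdk_nvme_perf invocation that follows then drives a 30 second, queue-depth 128, 512-byte random-read workload from a single core against 10.0.0.2:4420 while namespace 1 is hot-plugged underneath it; the "Message suppressed 999 times" lines further down are that workload's rate-limited error reporting.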
00:27:36.953 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3856535 00:27:36.953 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:36.954 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:36.954 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:38.324 Read completed with error (sct=0, sc=11) 00:27:38.324 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:38.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:38.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:38.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:38.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:38.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:38.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:38.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:38.582 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:38.582 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:38.839 true 00:27:38.839 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:38.839 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:39.853 10:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:39.853 10:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:39.853 10:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:40.110 true 00:27:40.110 10:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:40.110 10:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:40.367 10:01:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:40.625 10:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:40.625 10:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:40.882 true 00:27:40.882 10:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:40.882 10:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:41.140 10:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:41.396 10:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:41.396 10:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:41.653 true 00:27:41.653 10:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:41.653 10:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:42.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.585 10:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:43.148 10:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:43.148 10:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:43.148 true 00:27:43.148 10:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:43.149 10:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:43.712 10:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:43.712 10:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:43.712 10:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:43.970 true 00:27:43.970 10:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:43.970 10:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:44.227 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.500 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:44.500 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:44.757 true 00:27:44.757 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:44.757 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:45.689 10:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:45.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:45.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:45.947 10:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:45.947 10:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:46.204 true 00:27:46.204 10:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:46.204 10:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:46.769 10:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:46.769 10:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:46.769 10:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:47.027 true 00:27:47.027 10:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:47.027 10:01:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:47.960 10:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:47.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:47.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:48.218 10:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:48.218 10:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:48.476 true 00:27:48.476 10:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:48.476 10:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.733 10:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:48.990 10:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:48.990 10:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:49.248 true 00:27:49.248 10:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:49.248 10:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.182 10:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.439 10:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:50.439 10:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:50.697 true 00:27:50.697 10:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:50.697 10:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:50.954 10:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:51.212 10:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:51.212 10:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:51.470 true 00:27:51.470 10:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:51.470 10:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:52.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:52.415 10:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:52.415 10:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:52.415 10:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:52.676 true 00:27:52.676 10:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:52.676 10:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:53.240 10:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:53.240 10:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:53.240 10:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:53.498 true 00:27:53.756 10:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:53.756 10:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.014 10:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.271 10:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:54.271 10:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:54.529 true 00:27:54.529 10:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:54.529 10:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.461 10:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.718 10:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:55.718 10:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:55.976 true 00:27:55.976 10:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:55.976 10:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:56.233 10:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:56.491 10:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:27:56.491 10:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:56.748 true 00:27:56.748 10:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:56.748 10:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.006 10:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.264 10:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:57.264 10:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:57.521 true 00:27:57.521 10:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:57.521 10:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.613 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.613 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:27:58.613 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:27:58.870 true 00:27:59.128 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:59.128 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.385 10:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.643 10:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:27:59.643 10:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:27:59.901 true 00:27:59.901 10:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:27:59.901 10:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.158 10:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.416 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:00.416 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:00.673 true 00:28:00.673 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:28:00.673 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.604 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.604 10:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.862 10:01:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:01.862 10:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:02.119 true 00:28:02.119 10:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:28:02.119 10:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.376 10:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.634 10:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:02.634 10:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:02.891 true 00:28:02.891 10:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:28:02.891 10:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.149 10:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.406 10:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:03.406 10:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:03.662 true 00:28:03.663 10:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:28:03.663 10:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.595 10:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.853 10:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:04.853 10:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:05.111 true 
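Each of the null_size steps above (1001 up through 1026 at this point) is one pass of the same hotplug cycle, which schematically looks like the loop below. This is a sketch of what the trace shows, not a verbatim copy of ns_hotplug_stress.sh; $rpc_py and $PERF_PID are the variables set earlier in the script (PERF_PID is 3856535 in this run).

null_size=1000
while kill -0 "$PERF_PID"; do                                        # keep cycling while spdk_nvme_perf is still alive
	$rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove namespace 1 under active I/O
	$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add the Delay0 bdev back
	null_size=$((null_size + 1))
	$rpc_py bdev_null_resize NULL1 "$null_size"                      # grow NULL1 by 1 MB each pass
done

The "Read completed with error (sct=0, sc=11)" messages interleaved through these cycles come from the perf workload and are consistent with reads landing on the namespace that was just hot-removed (0x0b is the generic "Invalid Namespace or Format" status). Once perf exits, kill -0 fails, the loop ends, and the script waits for the perf process and removes both namespaces before starting the multi-worker phase.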
00:28:05.368 10:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:28:05.368 10:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.625 10:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.883 10:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:05.883 10:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:06.140 true 00:28:06.140 10:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:28:06.140 10:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.397 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.655 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:28:06.655 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:28:06.912 true 00:28:06.912 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:28:06.912 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.844 Initializing NVMe Controllers 00:28:07.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:07.844 Controller IO queue size 128, less than required. 00:28:07.844 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:07.844 Controller IO queue size 128, less than required. 00:28:07.844 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:07.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:07.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:07.844 Initialization complete. Launching workers. 
00:28:07.844 ======================================================== 00:28:07.844 Latency(us) 00:28:07.844 Device Information : IOPS MiB/s Average min max 00:28:07.844 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 690.93 0.34 82712.41 3329.09 1015460.48 00:28:07.844 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9003.19 4.40 14218.09 2797.70 537093.51 00:28:07.844 ======================================================== 00:28:07.844 Total : 9694.12 4.73 19099.91 2797.70 1015460.48 00:28:07.844 00:28:07.844 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:08.102 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:28:08.102 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:28:08.360 true 00:28:08.360 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3856535 00:28:08.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3856535) - No such process 00:28:08.360 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3856535 00:28:08.360 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.618 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:08.876 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:08.876 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:08.876 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:08.876 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:08.876 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:09.134 null0 00:28:09.134 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:09.134 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:09.134 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:09.392 null1 00:28:09.392 10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:09.392 
10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:09.392 10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:09.650 null2 00:28:09.650 10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:09.650 10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:09.650 10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:09.908 null3 00:28:09.908 10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:09.908 10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:09.908 10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:10.165 null4 00:28:10.165 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:10.165 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:10.165 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:10.422 null5 00:28:10.422 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:10.422 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:10.422 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:10.680 null6 00:28:10.938 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:10.938 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:10.938 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:11.196 null7 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:11.196 10:01:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
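The interleaved "-- #" xtrace above comes from several add_remove workers running concurrently in the background, one per namespace, which is why loop counters and RPC calls from different workers appear shuffled together. Going by the ns_hotplug_stress.sh@14-@18 markers, each worker is a ten-iteration attach/detach loop; a minimal reconstruction of that helper, with variable names and the rpc.py path inferred from the trace rather than copied from the script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2                                    # sh@14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do                           # sh@16
            $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev" # sh@17: attach bdev as namespace nsid
            $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"         # sh@18: detach it again
        done
    }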
00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:11.196 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
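The setup for these workers is also visible in this block: the backing null bdevs are created with "bdev_null_create nullN 100 4096" (100 MB, 4096-byte block size; null2 through null7 appear here), then one backgrounded add_remove per namespace is launched and its PID recorded (sh@59-@64), with a single wait on all collected PIDs a little further down (sh@66). A rough sketch of that driver loop, assuming nthreads=8 as implied by null0..null7, and reusing $rpc and add_remove from the sketch above:

    nthreads=8
    for ((i = 0; i < nthreads; i++)); do
        $rpc bdev_null_create "null$i" 100 4096    # sh@60: backing bdevs
    done

    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &           # sh@63: nsid i+1 backed by null$i
        pids+=($!)                                 # sh@64: remember the worker PID
    done
    wait "${pids[@]}"                              # sh@66: cf. "wait 3860551 3860552 ..." below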
00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3860551 3860552 3860554 3860555 3860558 3860560 3860562 3860564 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.197 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:11.454 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:11.455 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:11.455 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.455 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:11.455 10:01:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:11.455 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:11.455 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:11.455 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.713 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:11.972 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:11.972 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:11.972 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.972 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:11.972 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:11.972 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:11.972 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:11.972 10:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 
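From this point on the block is the same pattern repeating: each worker adds its namespace to nqn.2016-06.io.spdk:cnode1 and immediately removes it again, ten times over, so the rest of the output is alternating bursts of nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns calls in arbitrary per-worker order. Reproducing a single hotplug cycle by hand against a running target should only need the same RPCs seen in this trace; the sketch below assumes the subsystem already exists, and the final bdev_null_delete is a cleanup step that does not appear in this log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc bdev_null_create null0 100 4096           # 100 MB null bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns -n 1 "$nqn" null0   # attach as namespace 1
    $rpc nvmf_subsystem_remove_ns "$nqn" 1         # hot-remove namespace 1
    $rpc bdev_null_delete null0                    # clean up the backing bdev (assumption, not in trace)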
00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.230 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:12.488 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:12.488 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:12.488 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.488 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:12.488 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:12.488 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:12.488 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:12.488 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:12.746 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.746 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.746 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.004 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:13.262 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:13.262 10:01:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:13.262 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:13.262 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:13.262 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:13.262 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:13.262 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:13.262 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.521 10:01:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.521 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:13.779 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:13.779 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:13.779 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:13.779 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:13.779 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:13.779 
10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:13.779 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:13.779 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.038 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:14.297 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:14.297 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:14.297 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:14.297 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:14.297 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:14.297 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:14.297 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:14.297 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:14.555 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.555 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:28:14.555 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:14.555 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.555 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.555 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:14.814 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.814 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.814 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:14.814 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.814 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.814 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:14.814 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.814 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.814 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:14.814 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.814 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.814 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:14.814 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.814 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.814 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:14.814 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.814 
10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.814 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:15.072 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:15.072 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:15.072 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:15.072 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:15.072 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:15.072 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:15.072 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:15.072 10:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.330 10:01:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:15.330 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:15.587 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:15.587 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:15.587 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:15.587 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:15.587 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:15.587 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.587 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:15.587 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:15.845 10:01:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.845 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:16.103 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:16.103 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:16.103 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:16.103 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:16.103 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:16.103 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:16.103 
10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.103 10:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:16.669 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.669 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:16.670 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:16.928 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:16.928 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:16.928 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:16.928 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:16.928 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.928 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:17.186 rmmod nvme_tcp 00:28:17.186 rmmod nvme_fabrics 00:28:17.186 rmmod nvme_keyring 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3856120 ']' 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3856120 00:28:17.186 10:01:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3856120 ']' 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3856120 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:17.186 10:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3856120 00:28:17.186 10:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:17.186 10:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:17.186 10:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3856120' 00:28:17.186 killing process with pid 3856120 00:28:17.186 10:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3856120 00:28:17.186 10:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3856120 00:28:17.445 10:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:17.445 10:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:17.445 10:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:17.445 10:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:17.445 10:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:17.445 10:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:17.445 10:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:17.445 10:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:17.445 10:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:17.445 10:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.445 10:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:17.445 10:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:19.982 00:28:19.982 real 0m47.913s 00:28:19.982 user 3m20.138s 00:28:19.982 sys 0m21.796s 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:19.982 10:01:56 
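The interleaved @16-@18 trace above is the namespace hot-plug stress loop: each of the eight null bdevs is repeatedly attached to nqn.2016-06.io.spdk:cnode1 as a namespace and detached again, ten times over. A minimal sketch of that cycle, assuming one backgrounded worker per namespace (an inference from the scrambled completion order in the log, not the verbatim upstream script):

    # Hypothetical reconstruction of the traced loop (ns_hotplug_stress.sh@16-@18).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; i++)); do                                              # @16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }
    # one worker per namespace, nsid N backed by bdev null(N-1), all running concurrently
    for n in $(seq 1 8); do
        add_remove "$n" "null$((n - 1))" &
    done
    wait
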
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:19.982 ************************************ 00:28:19.982 END TEST nvmf_ns_hotplug_stress 00:28:19.982 ************************************ 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:19.982 ************************************ 00:28:19.982 START TEST nvmf_delete_subsystem 00:28:19.982 ************************************ 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:19.982 * Looking for test storage... 00:28:19.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:19.982 10:01:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:19.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.982 --rc genhtml_branch_coverage=1 00:28:19.982 --rc genhtml_function_coverage=1 00:28:19.982 --rc genhtml_legend=1 00:28:19.982 --rc geninfo_all_blocks=1 00:28:19.982 --rc geninfo_unexecuted_blocks=1 00:28:19.982 00:28:19.982 ' 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:19.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.982 --rc genhtml_branch_coverage=1 00:28:19.982 --rc genhtml_function_coverage=1 00:28:19.982 --rc genhtml_legend=1 00:28:19.982 --rc geninfo_all_blocks=1 00:28:19.982 --rc geninfo_unexecuted_blocks=1 00:28:19.982 00:28:19.982 ' 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:19.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.982 --rc genhtml_branch_coverage=1 00:28:19.982 --rc genhtml_function_coverage=1 00:28:19.982 --rc genhtml_legend=1 00:28:19.982 --rc geninfo_all_blocks=1 00:28:19.982 --rc 
geninfo_unexecuted_blocks=1 00:28:19.982 00:28:19.982 ' 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:19.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.982 --rc genhtml_branch_coverage=1 00:28:19.982 --rc genhtml_function_coverage=1 00:28:19.982 --rc genhtml_legend=1 00:28:19.982 --rc geninfo_all_blocks=1 00:28:19.982 --rc geninfo_unexecuted_blocks=1 00:28:19.982 00:28:19.982 ' 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.982 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:19.983 10:01:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:19.983 10:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:21.887 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.887 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:21.887 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:21.888 10:01:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:21.888 10:01:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:21.888 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:21.888 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.888 10:01:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:21.888 Found net devices under 0000:09:00.0: cvl_0_0 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:21.888 Found net devices under 0000:09:00.1: cvl_0_1 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.888 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.889 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.889 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:21.889 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.889 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.147 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.147 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:22.147 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:22.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:28:22.147 00:28:22.147 --- 10.0.0.2 ping statistics --- 00:28:22.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.147 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:28:22.147 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:22.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:28:22.147 00:28:22.147 --- 10.0.0.1 ping statistics --- 00:28:22.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.147 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:28:22.147 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.147 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:22.147 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:22.147 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.147 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:22.147 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:22.147 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.147 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:22.147 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:22.147 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:22.147 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:22.148 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:22.148 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.148 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3863435 00:28:22.148 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:22.148 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3863435 00:28:22.148 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3863435 ']' 00:28:22.148 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.148 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.148 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
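The nvmftestinit trace above wires the two detected E810 ports (PCI 0x8086:0x159b, net devices cvl_0_0 and cvl_0_1) into a point-to-point test bed: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, cvl_0_1 stays on the host as the initiator at 10.0.0.1, TCP port 4420 is opened, and reachability is verified with a ping in each direction. A condensed sketch of that plumbing, with the commands mirrored from the trace rather than from the nvmf/common.sh helpers that emit them:

    # Sketch of the netns test bed built in the log (nvmf/common.sh@267-@291).
    TARGET_NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"                 # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays on the host
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    # allow NVMe/TCP traffic to port 4420 on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                     # host -> target namespace
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1          # target namespace -> host
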
00:28:22.148 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.148 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.148 [2024-11-20 10:01:58.893088] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:22.148 [2024-11-20 10:01:58.894150] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:28:22.148 [2024-11-20 10:01:58.894202] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.148 [2024-11-20 10:01:58.964722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:22.148 [2024-11-20 10:01:59.021996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.148 [2024-11-20 10:01:59.022041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.148 [2024-11-20 10:01:59.022069] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.148 [2024-11-20 10:01:59.022080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.148 [2024-11-20 10:01:59.022089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:22.148 [2024-11-20 10:01:59.023403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.148 [2024-11-20 10:01:59.023408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.406 [2024-11-20 10:01:59.111259] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:22.406 [2024-11-20 10:01:59.111272] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:22.406 [2024-11-20 10:01:59.111542] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
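With the namespace in place, nvmfappstart launches nvmf_tgt inside it in interrupt mode on a two-core mask and blocks until the RPC socket answers, which is what the NOTICE lines above report (two reactors, threads switched to intr mode). A rough sketch of that launch; the polling loop shown here is an assumed stand-in for the waitforlisten helper, not its actual implementation:

    # Sketch of the traced target launch, with a simplified RPC-socket poll.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # wait until the app responds on /var/tmp/spdk.sock before issuing configuration RPCs
    until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods &> /dev/null; do
        sleep 0.5
    done
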
00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.406 [2024-11-20 10:01:59.163993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.406 [2024-11-20 10:01:59.180214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.406 NULL1 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.406 10:01:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.406 Delay0 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3863462 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:22.406 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:22.406 [2024-11-20 10:01:59.266109] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
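Condensed from the xtrace lines above, the delete_subsystem test builds its target configuration with six RPCs and then points spdk_nvme_perf at the new listener. The sketch below replays the same calls through scripts/rpc.py for readability; the test itself goes through its rpc_cmd helper, and the rpc.py path is an assumption. The 1000000 us settings on the delay bdev keep roughly a second of latency on every I/O, which is what leaves requests in flight for the later nvmf_delete_subsystem call to race against.

    # Same configuration as the traced rpc_cmd calls (sketch; rpc.py path assumed)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # Load generator on cores 2-3 (0xC): 5 s run, queue depth 128, 70/30 randrw, 512 B I/O
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!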
00:28:24.303 10:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:24.303 10:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.303 10:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 [2024-11-20 10:02:01.470434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54b2c0 is same with the state(6) to be set 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed 
with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read 
completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 starting I/O failed: -6 00:28:24.562 [2024-11-20 10:02:01.471709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0ff400d350 is same with the state(6) to be set 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.562 Read completed with error (sct=0, sc=8) 00:28:24.562 Write completed with error (sct=0, sc=8) 00:28:24.563 Write completed with error (sct=0, sc=8) 00:28:24.563 Read completed with error (sct=0, sc=8) 00:28:24.563 Read completed with error (sct=0, sc=8) 00:28:24.563 Read completed with error (sct=0, sc=8) 00:28:24.563 Write completed with error (sct=0, sc=8) 00:28:24.563 Write completed with error (sct=0, sc=8) 00:28:24.563 Write completed with error (sct=0, sc=8) 00:28:24.563 Read completed with error (sct=0, sc=8) 00:28:24.563 Write completed with error (sct=0, sc=8) 00:28:24.563 Read completed 
with error (sct=0, sc=8) 00:28:24.563 Read completed with error (sct=0, sc=8) 00:28:24.563 Write completed with error (sct=0, sc=8) 00:28:24.563 Read completed with error (sct=0, sc=8) 00:28:24.563 Write completed with error (sct=0, sc=8) 00:28:24.563 Read completed with error (sct=0, sc=8) 00:28:24.563 Read completed with error (sct=0, sc=8) 00:28:24.563 Read completed with error (sct=0, sc=8) 00:28:24.563 Read completed with error (sct=0, sc=8) 00:28:24.563 Write completed with error (sct=0, sc=8) 00:28:25.934 [2024-11-20 10:02:02.445542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54c9a0 is same with the state(6) to be set 00:28:25.934 Read completed with error (sct=0, sc=8) 00:28:25.934 Read completed with error (sct=0, sc=8) 00:28:25.934 Read completed with error (sct=0, sc=8) 00:28:25.934 Read completed with error (sct=0, sc=8) 00:28:25.934 Read completed with error (sct=0, sc=8) 00:28:25.934 Read completed with error (sct=0, sc=8) 00:28:25.934 Read completed with error (sct=0, sc=8) 00:28:25.934 Read completed with error (sct=0, sc=8) 00:28:25.934 Read completed with error (sct=0, sc=8) 00:28:25.934 Read completed with error (sct=0, sc=8) 00:28:25.934 Read completed with error (sct=0, sc=8) 00:28:25.934 Read completed with error (sct=0, sc=8) 00:28:25.934 Read completed with error (sct=0, sc=8) 00:28:25.934 Read completed with error (sct=0, sc=8) 00:28:25.935 Write completed with error (sct=0, sc=8) 00:28:25.935 Write completed with error (sct=0, sc=8) 00:28:25.935 Write completed with error (sct=0, sc=8) 00:28:25.935 Write completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Write completed with error (sct=0, sc=8) 00:28:25.935 [2024-11-20 10:02:02.474349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54b4a0 is same with the state(6) to be set 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Write completed with error (sct=0, sc=8) 00:28:25.935 Write completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Write completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Write completed with error (sct=0, sc=8) 00:28:25.935 [2024-11-20 10:02:02.475265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0ff400d020 is same with the state(6) to be set 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Write completed with error (sct=0, sc=8) 00:28:25.935 Write completed with error (sct=0, sc=8) 00:28:25.935 Write completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Write completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with 
error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 [2024-11-20 10:02:02.475415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0ff400d680 is same with the state(6) to be set 00:28:25.935 Write completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Write completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Write completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Write completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Write completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 Read completed with error (sct=0, sc=8) 00:28:25.935 [2024-11-20 10:02:02.475859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54b860 is same with the state(6) to be set 00:28:25.935 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.935 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:28:25.935 Initializing NVMe Controllers 00:28:25.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:25.935 Controller IO queue size 128, less than required. 00:28:25.935 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:25.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:25.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:25.935 Initialization complete. Launching workers. 
00:28:25.935 ======================================================== 00:28:25.935 Latency(us) 00:28:25.935 Device Information : IOPS MiB/s Average min max 00:28:25.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.80 0.08 909022.13 470.00 1013460.41 00:28:25.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 151.39 0.07 941232.90 372.58 1014614.91 00:28:25.935 ======================================================== 00:28:25.935 Total : 315.19 0.15 924493.44 372.58 1014614.91 00:28:25.935 00:28:25.935 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3863462 00:28:25.935 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:28:25.935 [2024-11-20 10:02:02.476859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x54c9a0 (9): Bad file descriptor 00:28:25.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3863462 00:28:26.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3863462) - No such process 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3863462 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3863462 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3863462 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.195 
10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.195 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:26.195 [2024-11-20 10:02:02.996300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.195 10:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.195 10:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:26.195 10:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.195 10:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:26.195 10:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.195 10:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3863873 00:28:26.195 10:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:26.195 10:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:26.195 10:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3863873 00:28:26.195 10:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:26.195 [2024-11-20 10:02:03.060951] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
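The error completions in the first run above are the expected outcome: nvmf_delete_subsystem is issued about two seconds into the five-second perf run (the sleep 2 at line 30 of the script), so outstanding I/O against the delayed namespace fails once cnode1 is torn down, and the script then only has to confirm that spdk_nvme_perf exits on its own. Both runs use the same wait pattern, traced as delete_subsystem.sh lines 34-38 and 56-60; a reconstructed sketch of that pattern follows, with the caveat that the real script's control flow may be arranged differently than shown.

    # Reconstructed wait pattern (sketch; the traced steps are kill -0,
    # sleep 0.5, and a delay counter capped at roughly 20-30 iterations).
    # perf_pid: PID of the backgrounded spdk_nvme_perf (3863462 / 3863873 here).
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        if (( delay++ > 20 )); then
            echo "spdk_nvme_perf did not exit in time" >&2
            exit 1
        fi
        sleep 0.5
    done

In the second run, the 3-second perf job started just above, the target stays up, so the same loop simply waits out the run and the test then proceeds to its teardown.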
00:28:26.760 10:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:26.760 10:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3863873 00:28:26.760 10:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:27.324 10:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:27.324 10:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3863873 00:28:27.324 10:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:27.889 10:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:27.889 10:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3863873 00:28:27.889 10:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:28.146 10:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:28.146 10:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3863873 00:28:28.146 10:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:28.710 10:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:28.710 10:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3863873 00:28:28.710 10:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:29.275 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:29.275 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3863873 00:28:29.275 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:29.532 Initializing NVMe Controllers 00:28:29.532 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:29.532 Controller IO queue size 128, less than required. 00:28:29.532 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:29.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:29.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:29.532 Initialization complete. Launching workers. 
00:28:29.532 ======================================================== 00:28:29.532 Latency(us) 00:28:29.532 Device Information : IOPS MiB/s Average min max 00:28:29.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005574.71 1000228.01 1044095.40 00:28:29.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006430.44 1000232.67 1044411.47 00:28:29.532 ======================================================== 00:28:29.532 Total : 256.00 0.12 1006002.57 1000228.01 1044411.47 00:28:29.532 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3863873 00:28:29.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3863873) - No such process 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3863873 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:29.791 rmmod nvme_tcp 00:28:29.791 rmmod nvme_fabrics 00:28:29.791 rmmod nvme_keyring 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3863435 ']' 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3863435 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3863435 ']' 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3863435 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3863435 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3863435' 00:28:29.791 killing process with pid 3863435 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3863435 00:28:29.791 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3863435 00:28:30.049 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:30.049 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:30.049 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:30.049 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:30.049 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:30.050 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:30.050 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:30.050 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:30.050 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:30.050 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.050 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.050 10:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.582 10:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:32.582 00:28:32.582 real 0m12.552s 00:28:32.582 user 0m24.794s 00:28:32.582 sys 0m3.983s 00:28:32.582 10:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:32.582 10:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:32.582 ************************************ 00:28:32.582 END TEST nvmf_delete_subsystem 00:28:32.582 ************************************ 00:28:32.582 10:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:32.582 10:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:32.582 10:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:28:32.582 10:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:32.582 ************************************ 00:28:32.582 START TEST nvmf_host_management 00:28:32.582 ************************************ 00:28:32.582 10:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:32.582 * Looking for test storage... 00:28:32.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:32.582 10:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:32.582 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:32.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.583 --rc genhtml_branch_coverage=1 00:28:32.583 --rc genhtml_function_coverage=1 00:28:32.583 --rc genhtml_legend=1 00:28:32.583 --rc geninfo_all_blocks=1 00:28:32.583 --rc geninfo_unexecuted_blocks=1 00:28:32.583 00:28:32.583 ' 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:32.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.583 --rc genhtml_branch_coverage=1 00:28:32.583 --rc genhtml_function_coverage=1 00:28:32.583 --rc genhtml_legend=1 00:28:32.583 --rc geninfo_all_blocks=1 00:28:32.583 --rc geninfo_unexecuted_blocks=1 00:28:32.583 00:28:32.583 ' 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:32.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.583 --rc genhtml_branch_coverage=1 00:28:32.583 --rc genhtml_function_coverage=1 00:28:32.583 --rc genhtml_legend=1 00:28:32.583 --rc geninfo_all_blocks=1 00:28:32.583 --rc geninfo_unexecuted_blocks=1 00:28:32.583 00:28:32.583 ' 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:32.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.583 --rc genhtml_branch_coverage=1 00:28:32.583 --rc genhtml_function_coverage=1 00:28:32.583 --rc genhtml_legend=1 
00:28:32.583 --rc geninfo_all_blocks=1 00:28:32.583 --rc geninfo_unexecuted_blocks=1 00:28:32.583 00:28:32.583 ' 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:32.583 10:02:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:32.583 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.584 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:32.584 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:32.584 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:32.584 10:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:34.487 10:02:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:34.487 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:34.487 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:28:34.487 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:34.488 Found net devices under 0000:09:00.0: cvl_0_0 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:34.488 Found net devices under 0000:09:00.1: cvl_0_1 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:34.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:34.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:28:34.488 00:28:34.488 --- 10.0.0.2 ping statistics --- 00:28:34.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.488 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:34.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:34.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:28:34.488 00:28:34.488 --- 10.0.0.1 ping statistics --- 00:28:34.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.488 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3866323 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3866323 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3866323 ']' 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:34.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.488 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:34.747 [2024-11-20 10:02:11.404228] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:34.747 [2024-11-20 10:02:11.405351] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:28:34.747 [2024-11-20 10:02:11.405417] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.747 [2024-11-20 10:02:11.478955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:34.747 [2024-11-20 10:02:11.541513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.747 [2024-11-20 10:02:11.541568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.747 [2024-11-20 10:02:11.541598] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.747 [2024-11-20 10:02:11.541610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.747 [2024-11-20 10:02:11.541620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:34.747 [2024-11-20 10:02:11.543193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:34.747 [2024-11-20 10:02:11.543257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:34.747 [2024-11-20 10:02:11.543331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:34.747 [2024-11-20 10:02:11.543335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.747 [2024-11-20 10:02:11.640345] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:34.747 [2024-11-20 10:02:11.640595] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:34.747 [2024-11-20 10:02:11.640908] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:34.747 [2024-11-20 10:02:11.641597] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:34.747 [2024-11-20 10:02:11.641827] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
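At this point nvmfappstart has launched the target inside the cvl_0_0_ns_spdk namespace with interrupt mode enabled, and waitforlisten is polling until the RPC socket appears. A minimal sketch of that launch-and-wait pattern, using the command line from this run (the polling loop is illustrative; the real waitforlisten helper also bounds its retries and gives up if the process exits):

  # Start nvmf_tgt inside the test namespace and wait for its RPC socket.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && break         # socket shows up once the app is listening
      kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died during startup
      sleep 0.5
  done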
00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:35.005 [2024-11-20 10:02:11.692100] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:35.005 Malloc0 00:28:35.005 [2024-11-20 10:02:11.772382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3866367 00:28:35.005 10:02:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3866367 /var/tmp/bdevperf.sock 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3866367 ']' 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:35.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:35.005 { 00:28:35.005 "params": { 00:28:35.005 "name": "Nvme$subsystem", 00:28:35.005 "trtype": "$TEST_TRANSPORT", 00:28:35.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.005 "adrfam": "ipv4", 00:28:35.005 "trsvcid": "$NVMF_PORT", 00:28:35.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.005 "hdgst": ${hdgst:-false}, 00:28:35.005 "ddgst": ${ddgst:-false} 00:28:35.005 }, 00:28:35.005 "method": "bdev_nvme_attach_controller" 00:28:35.005 } 00:28:35.005 EOF 00:28:35.005 )") 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
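The heredoc above is gen_nvmf_target_json assembling a bdev_nvme_attach_controller entry for Nvme0; bdevperf reads the finished config through --json /dev/fd/63, which is simply what a bash process substitution expands to, so no config file is written to disk. A minimal sketch of that invocation, reusing the options from this run (-q queue depth 64, -o 65536-byte I/O size, -w verify workload, -t 10 seconds):

  # Feed the generated JSON config to bdevperf via process substitution.
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10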
00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:35.005 10:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:35.005 "params": { 00:28:35.005 "name": "Nvme0", 00:28:35.005 "trtype": "tcp", 00:28:35.005 "traddr": "10.0.0.2", 00:28:35.005 "adrfam": "ipv4", 00:28:35.005 "trsvcid": "4420", 00:28:35.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:35.005 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:35.005 "hdgst": false, 00:28:35.005 "ddgst": false 00:28:35.005 }, 00:28:35.005 "method": "bdev_nvme_attach_controller" 00:28:35.005 }' 00:28:35.005 [2024-11-20 10:02:11.859030] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:28:35.006 [2024-11-20 10:02:11.859106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3866367 ] 00:28:35.263 [2024-11-20 10:02:11.929896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.263 [2024-11-20 10:02:11.989787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.520 Running I/O for 10 seconds... 00:28:35.520 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.520 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:35.520 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:35.520 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.520 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:35.520 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.520 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:35.520 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:35.521 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:35.521 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:35.521 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:35.521 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:35.521 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:35.521 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:35.521 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:35.521 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:35.521 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.521 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:35.521 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.521 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:28:35.521 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:28:35.521 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:35.779 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:35.779 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:35.779 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:35.779 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:35.779 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.779 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:35.779 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.779 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:28:35.779 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:28:35.779 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:35.780 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:35.780 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:35.780 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:35.780 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.780 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:35.780 [2024-11-20 10:02:12.596251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596368] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.596981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.596997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.597011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.597026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.597041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.597060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.597075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.597091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.597105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.597120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.597135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.597151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.597165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.597180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.597194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.597210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.597225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.597240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.597254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.597269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.597286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.597310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.597328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.597343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.597358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.597374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.597389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.597405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.597419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.597435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.780 [2024-11-20 10:02:12.597454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.780 [2024-11-20 10:02:12.597470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.597485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.597501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.597515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.597531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.597546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.597562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.597576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.597592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.597606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.597622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.597637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.597653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.597667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.597683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.597697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.597713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.597728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.597744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.597758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.597774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.597789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.597805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.597820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.597840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.597855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.597873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.597889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.597906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.597921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.597937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.597952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.597968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.597983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.597999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.598014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.598030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.598045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.598060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.598076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.598091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.598106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.598122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.598137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.598153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.598168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.598184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.598199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.598215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.598234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.598251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.598266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.598282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.598299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.598324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.781 [2024-11-20 10:02:12.598340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.599539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:35.781 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.781 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:35.781 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.781 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:35.781 task offset: 74496 on job bdev=Nvme0n1 fails 00:28:35.781 00:28:35.781 Latency(us) 00:28:35.781 [2024-11-20T09:02:12.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.781 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.781 Job: Nvme0n1 ended in about 0.39 seconds with error 00:28:35.781 Verification LBA range: start 0x0 length 0x400 00:28:35.781 Nvme0n1 : 0.39 1472.86 92.05 163.65 0.00 37964.36 2730.67 37671.06 00:28:35.781 [2024-11-20T09:02:12.695Z] =================================================================================================================== 00:28:35.781 [2024-11-20T09:02:12.695Z] Total : 1472.86 92.05 163.65 0.00 37964.36 2730.67 37671.06 00:28:35.781 [2024-11-20 10:02:12.601445] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:35.781 [2024-11-20 10:02:12.601475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbea40 (9): Bad file descriptor 00:28:35.781 [2024-11-20 10:02:12.602581] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:28:35.781 [2024-11-20 10:02:12.602690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:35.781 [2024-11-20 10:02:12.602718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.781 [2024-11-20 10:02:12.602746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:28:35.781 [2024-11-20 10:02:12.602764] 
nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:28:35.781 [2024-11-20 10:02:12.602778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.781 [2024-11-20 10:02:12.602791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dbea40 00:28:35.781 [2024-11-20 10:02:12.602826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbea40 (9): Bad file descriptor 00:28:35.781 [2024-11-20 10:02:12.602858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:35.781 [2024-11-20 10:02:12.602874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:35.781 [2024-11-20 10:02:12.602891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:35.782 [2024-11-20 10:02:12.602906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:35.782 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.782 10:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:36.715 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3866367 00:28:36.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3866367) - No such process 00:28:36.715 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:36.715 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:36.715 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:36.715 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:36.715 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:36.715 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:36.715 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.715 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.715 { 00:28:36.715 "params": { 00:28:36.715 "name": "Nvme$subsystem", 00:28:36.715 "trtype": "$TEST_TRANSPORT", 00:28:36.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.715 "adrfam": "ipv4", 00:28:36.715 "trsvcid": "$NVMF_PORT", 00:28:36.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.715 "hdgst": ${hdgst:-false}, 00:28:36.715 "ddgst": ${ddgst:-false} 00:28:36.715 }, 00:28:36.715 "method": "bdev_nvme_attach_controller" 00:28:36.715 } 00:28:36.715 EOF 00:28:36.715 
)") 00:28:36.715 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:36.715 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:36.715 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:36.715 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:36.715 "params": { 00:28:36.715 "name": "Nvme0", 00:28:36.715 "trtype": "tcp", 00:28:36.715 "traddr": "10.0.0.2", 00:28:36.715 "adrfam": "ipv4", 00:28:36.715 "trsvcid": "4420", 00:28:36.715 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:36.715 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:36.715 "hdgst": false, 00:28:36.715 "ddgst": false 00:28:36.715 }, 00:28:36.715 "method": "bdev_nvme_attach_controller" 00:28:36.715 }' 00:28:36.974 [2024-11-20 10:02:13.659753] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:28:36.974 [2024-11-20 10:02:13.659829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3866645 ] 00:28:36.974 [2024-11-20 10:02:13.729398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.974 [2024-11-20 10:02:13.791231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.231 Running I/O for 1 seconds... 00:28:38.605 1664.00 IOPS, 104.00 MiB/s 00:28:38.605 Latency(us) 00:28:38.605 [2024-11-20T09:02:15.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.605 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:38.605 Verification LBA range: start 0x0 length 0x400 00:28:38.605 Nvme0n1 : 1.01 1716.00 107.25 0.00 0.00 36680.82 4538.97 33010.73 00:28:38.605 [2024-11-20T09:02:15.519Z] =================================================================================================================== 00:28:38.605 [2024-11-20T09:02:15.519Z] Total : 1716.00 107.25 0.00 0.00 36680.82 4538.97 33010.73 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # 
set +e 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:38.605 rmmod nvme_tcp 00:28:38.605 rmmod nvme_fabrics 00:28:38.605 rmmod nvme_keyring 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3866323 ']' 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3866323 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3866323 ']' 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3866323 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3866323 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3866323' 00:28:38.605 killing process with pid 3866323 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3866323 00:28:38.605 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3866323 00:28:38.864 [2024-11-20 10:02:15.713946] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:38.864 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:38.864 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:38.864 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:38.864 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:38.864 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:28:38.864 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:38.864 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:28:38.864 10:02:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:38.864 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:38.864 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.864 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.864 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:41.431 00:28:41.431 real 0m8.847s 00:28:41.431 user 0m17.744s 00:28:41.431 sys 0m3.704s 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:41.431 ************************************ 00:28:41.431 END TEST nvmf_host_management 00:28:41.431 ************************************ 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:41.431 ************************************ 00:28:41.431 START TEST nvmf_lvol 00:28:41.431 ************************************ 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:41.431 * Looking for test storage... 
00:28:41.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:41.431 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:41.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.432 --rc genhtml_branch_coverage=1 00:28:41.432 --rc genhtml_function_coverage=1 00:28:41.432 --rc genhtml_legend=1 00:28:41.432 --rc geninfo_all_blocks=1 00:28:41.432 --rc geninfo_unexecuted_blocks=1 00:28:41.432 00:28:41.432 ' 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:41.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.432 --rc genhtml_branch_coverage=1 00:28:41.432 --rc genhtml_function_coverage=1 00:28:41.432 --rc genhtml_legend=1 00:28:41.432 --rc geninfo_all_blocks=1 00:28:41.432 --rc geninfo_unexecuted_blocks=1 00:28:41.432 00:28:41.432 ' 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:41.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.432 --rc genhtml_branch_coverage=1 00:28:41.432 --rc genhtml_function_coverage=1 00:28:41.432 --rc genhtml_legend=1 00:28:41.432 --rc geninfo_all_blocks=1 00:28:41.432 --rc geninfo_unexecuted_blocks=1 00:28:41.432 00:28:41.432 ' 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:41.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.432 --rc genhtml_branch_coverage=1 00:28:41.432 --rc genhtml_function_coverage=1 00:28:41.432 --rc genhtml_legend=1 00:28:41.432 --rc geninfo_all_blocks=1 00:28:41.432 --rc geninfo_unexecuted_blocks=1 00:28:41.432 00:28:41.432 ' 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.432 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:41.433 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.433 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:41.433 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:41.433 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:41.433 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:41.433 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:41.433 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:41.433 10:02:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:41.433 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:41.433 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:41.433 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:41.433 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:41.433 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:41.433 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:41.433 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:41.433 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:41.433 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:41.433 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:41.433 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:41.433 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:41.433 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:41.433 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:41.433 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:41.433 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.433 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.433 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.433 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:41.433 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:41.433 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:41.433 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:43.380 10:02:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:43.380 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:43.380 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:43.380 Found net devices under 0000:09:00.0: cvl_0_0 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:43.380 Found net devices under 0000:09:00.1: cvl_0_1 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:43.380 
10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:43.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:43.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:28:43.380 00:28:43.380 --- 10.0.0.2 ping statistics --- 00:28:43.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.380 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:43.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:43.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:28:43.380 00:28:43.380 --- 10.0.0.1 ping statistics --- 00:28:43.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.380 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:28:43.380 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:43.381 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:43.381 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:43.381 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:43.381 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:43.381 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:43.381 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:43.381 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:43.381 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:43.381 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:43.381 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:43.381 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3868754 00:28:43.381 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:28:43.381 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3868754 00:28:43.381 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3868754 ']' 00:28:43.381 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.381 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:43.381 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.381 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:43.381 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:43.381 [2024-11-20 10:02:20.244374] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
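For orientation: the target-side network plumbing that nvmftestinit traced above condenses to the sketch below. This is a condensed summary, not the verbatim test/nvmf/common.sh (nvmf_tcp_init) code; the interface names (cvl_0_0, cvl_0_1) and addresses (10.0.0.2 target, 10.0.0.1 initiator) are simply the values this host detected and used.

    # condensed sketch of the nvmf_tcp_init steps traced above; run as root
    NS=cvl_0_0_ns_spdk        # namespace that holds the target-side port
    TGT_IF=cvl_0_0            # NIC port used by the NVMe-oF target
    INI_IF=cvl_0_1            # NIC port used by the initiator (host side)
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
    ping -c 1 10.0.0.2                         # host -> target reachability
    ip netns exec "$NS" ping -c 1 10.0.0.1     # target namespace -> host

With that plumbing in place, the nvmf_tgt launched below runs inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt --interrupt-mode -m 0x7) and listens on 10.0.0.2:4420, while the initiator-side tools connect from the host.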
00:28:43.381 [2024-11-20 10:02:20.245641] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:28:43.381 [2024-11-20 10:02:20.245710] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.637 [2024-11-20 10:02:20.325440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:43.637 [2024-11-20 10:02:20.387406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.637 [2024-11-20 10:02:20.387464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:43.637 [2024-11-20 10:02:20.387486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:43.637 [2024-11-20 10:02:20.387498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:43.637 [2024-11-20 10:02:20.387509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:43.637 [2024-11-20 10:02:20.389136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.637 [2024-11-20 10:02:20.390325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:43.637 [2024-11-20 10:02:20.390338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.637 [2024-11-20 10:02:20.493063] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:43.637 [2024-11-20 10:02:20.493340] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:43.637 [2024-11-20 10:02:20.493344] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:43.637 [2024-11-20 10:02:20.493650] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
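For readability, the rpc.py sequence that target/nvmf_lvol.sh drives below (transport, backing RAID, lvstore and lvol, NVMe-oF subsystem, then snapshot/resize/clone/inflate under load, then teardown) condenses to the following sketch. The $lvs/$lvol/$snapshot/$clone variables stand in for the UUIDs the script captures from each call; sizes 20 and 30 are LVOL_BDEV_INIT_SIZE and LVOL_BDEV_FINAL_SIZE from the script.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # condensed sketch of the nvmf_lvol.sh flow traced below (placeholders, not verbatim)
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512                      # Malloc0
    $RPC bdev_malloc_create 64 512                      # Malloc1
    $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)     # LVOL_BDEV_INIT_SIZE
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf runs against the subsystem:
    snapshot=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $RPC bdev_lvol_resize "$lvol" 30                    # LVOL_BDEV_FINAL_SIZE
    clone=$($RPC bdev_lvol_clone "$snapshot" MY_CLONE)
    $RPC bdev_lvol_inflate "$clone"
    # teardown
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $RPC bdev_lvol_delete "$lvol"
    $RPC bdev_lvol_delete_lvstore -u "$lvs"

The concurrent load is spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18, started just before the snapshot/clone steps so they happen under active I/O.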
00:28:43.637 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:43.638 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:28:43.638 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:43.638 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:43.638 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:43.638 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.638 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:43.989 [2024-11-20 10:02:20.782998] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.989 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:44.247 10:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:44.247 10:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:44.505 10:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:44.505 10:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:45.070 10:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:45.328 10:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8c7644e1-6161-4b49-886d-8505780ca1a3 00:28:45.328 10:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8c7644e1-6161-4b49-886d-8505780ca1a3 lvol 20 00:28:45.586 10:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=896a43b7-33ea-4895-81ab-bedaf277ad24 00:28:45.586 10:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:45.844 10:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 896a43b7-33ea-4895-81ab-bedaf277ad24 00:28:46.101 10:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:46.359 [2024-11-20 10:02:23.039102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:28:46.359 10:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:46.616 10:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3869152 00:28:46.616 10:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:46.616 10:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:47.550 10:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 896a43b7-33ea-4895-81ab-bedaf277ad24 MY_SNAPSHOT 00:28:47.808 10:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=fb2de744-d721-4605-a201-0f9279e7152e 00:28:47.808 10:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 896a43b7-33ea-4895-81ab-bedaf277ad24 30 00:28:48.066 10:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone fb2de744-d721-4605-a201-0f9279e7152e MY_CLONE 00:28:48.632 10:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=055585ff-6537-4f58-83c6-5b4828bb4029 00:28:48.632 10:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 055585ff-6537-4f58-83c6-5b4828bb4029 00:28:48.889 10:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3869152 00:28:56.999 Initializing NVMe Controllers 00:28:56.999 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:56.999 Controller IO queue size 128, less than required. 00:28:56.999 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:56.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:28:56.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:28:56.999 Initialization complete. Launching workers. 
00:28:56.999 ======================================================== 00:28:56.999 Latency(us) 00:28:56.999 Device Information : IOPS MiB/s Average min max 00:28:56.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10589.20 41.36 12091.49 4792.12 71347.72 00:28:56.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10525.30 41.11 12168.90 4419.10 67249.17 00:28:56.999 ======================================================== 00:28:56.999 Total : 21114.50 82.48 12130.08 4419.10 71347.72 00:28:56.999 00:28:56.999 10:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:57.257 10:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 896a43b7-33ea-4895-81ab-bedaf277ad24 00:28:57.515 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8c7644e1-6161-4b49-886d-8505780ca1a3 00:28:57.773 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:28:57.773 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:28:57.773 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:28:57.773 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:57.773 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:28:57.773 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:57.773 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:28:57.773 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:57.773 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:57.773 rmmod nvme_tcp 00:28:57.773 rmmod nvme_fabrics 00:28:57.773 rmmod nvme_keyring 00:28:57.773 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:57.773 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:28:57.773 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:28:57.773 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3868754 ']' 00:28:57.773 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3868754 00:28:57.773 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3868754 ']' 00:28:57.773 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3868754 00:28:57.773 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:28:57.773 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.773 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3868754 00:28:57.774 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:57.774 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:57.774 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3868754' 00:28:57.774 killing process with pid 3868754 00:28:57.774 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3868754 00:28:57.774 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3868754 00:28:58.032 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:58.032 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:58.032 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:58.032 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:28:58.032 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:28:58.032 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:58.032 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:28:58.032 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:58.032 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:58.032 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.032 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.032 10:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.566 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:00.566 00:29:00.566 real 0m19.086s 00:29:00.566 user 0m56.312s 00:29:00.566 sys 0m7.762s 00:29:00.566 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:00.566 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:00.566 ************************************ 00:29:00.566 END TEST nvmf_lvol 00:29:00.566 ************************************ 00:29:00.566 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:00.566 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:00.566 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:00.566 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:00.566 ************************************ 00:29:00.566 START TEST nvmf_lvs_grow 00:29:00.566 
************************************ 00:29:00.566 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:00.566 * Looking for test storage... 00:29:00.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:00.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.566 --rc genhtml_branch_coverage=1 00:29:00.566 --rc genhtml_function_coverage=1 00:29:00.566 --rc genhtml_legend=1 00:29:00.566 --rc geninfo_all_blocks=1 00:29:00.566 --rc geninfo_unexecuted_blocks=1 00:29:00.566 00:29:00.566 ' 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:00.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.566 --rc genhtml_branch_coverage=1 00:29:00.566 --rc genhtml_function_coverage=1 00:29:00.566 --rc genhtml_legend=1 00:29:00.566 --rc geninfo_all_blocks=1 00:29:00.566 --rc geninfo_unexecuted_blocks=1 00:29:00.566 00:29:00.566 ' 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:00.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.566 --rc genhtml_branch_coverage=1 00:29:00.566 --rc genhtml_function_coverage=1 00:29:00.566 --rc genhtml_legend=1 00:29:00.566 --rc geninfo_all_blocks=1 00:29:00.566 --rc geninfo_unexecuted_blocks=1 00:29:00.566 00:29:00.566 ' 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:00.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.566 --rc genhtml_branch_coverage=1 00:29:00.566 --rc genhtml_function_coverage=1 00:29:00.566 --rc genhtml_legend=1 00:29:00.566 --rc geninfo_all_blocks=1 00:29:00.566 --rc geninfo_unexecuted_blocks=1 00:29:00.566 00:29:00.566 ' 00:29:00.566 10:02:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.566 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
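For reference, the target argument assembly traced here condenses to the bash sketch below. Variable names are taken from the trace; the binary path and the interrupt_mode flag variable are placeholders the trace does not spell out, and the real nvmf/common.sh carries additional branches not shown.

  NVMF_APP=(./build/bin/nvmf_tgt)               # placeholder path; the run below launches the tree's own build/bin/nvmf_tgt
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id plus full tracepoint mask, as traced
  NVMF_APP+=("${NO_HUGE[@]}")                   # empty here; only populated for no-hugepage runs
  if [[ $interrupt_mode -eq 1 ]]; then          # this job tests interrupt mode, see the entries that follow
      NVMF_APP+=(--interrupt-mode)
  fi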
00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:00.567 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:02.478 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:02.478 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:02.478 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:02.478 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:02.478 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:02.478 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:02.478 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:02.478 10:02:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:02.478 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:02.478 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:02.478 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:02.478 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:02.478 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:02.478 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:02.478 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:02.478 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:02.478 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:02.478 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:02.478 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:02.478 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:02.478 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:02.478 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:02.478 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:02.478 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:02.478 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:02.478 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:02.478 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:02.478 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:02.478 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:02.478 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:02.478 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:02.478 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:02.478 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:02.478 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
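The device-discovery loop the trace walks through next reduces to the sketch below: for each matched E810 PCI address, the interfaces the kernel bound to it are read from sysfs and collected. The operstate and RDMA-specific checks the full nvmf/common.sh performs are omitted here.

  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev entries the kernel exposes for this function
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface names, e.g. cvl_0_0
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done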
00:29:02.478 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:02.478 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:02.479 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:02.479 Found net devices under 0000:09:00.0: cvl_0_0 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:02.479 Found net devices under 0000:09:00.1: cvl_0_1 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:02.479 10:02:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:02.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:02.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:29:02.479 00:29:02.479 --- 10.0.0.2 ping statistics --- 00:29:02.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.479 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:02.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:02.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:29:02.479 00:29:02.479 --- 10.0.0.1 ping statistics --- 00:29:02.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.479 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:02.479 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:02.480 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:02.480 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:02.480 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:02.480 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:02.480 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3872410 00:29:02.480 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:02.480 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3872410 00:29:02.480 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3872410 ']' 00:29:02.480 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.480 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:02.480 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:02.480 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:02.480 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:02.480 [2024-11-20 10:02:39.229242] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
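Condensed, the network plumbing set up above keeps one E810 port (cvl_0_1, 10.0.0.1) in the root namespace as the initiator side and moves the other (cvl_0_0, 10.0.0.2) into a namespace that hosts the target, so NVMe/TCP traffic crosses the two ports rather than loopback. Paths are shortened and the iptables comment tag dropped in this sketch:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-facing port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator reachability
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1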
00:29:02.480 [2024-11-20 10:02:39.230317] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:29:02.480 [2024-11-20 10:02:39.230373] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.480 [2024-11-20 10:02:39.302835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.480 [2024-11-20 10:02:39.358447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.480 [2024-11-20 10:02:39.358502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.480 [2024-11-20 10:02:39.358530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:02.480 [2024-11-20 10:02:39.358541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:02.480 [2024-11-20 10:02:39.358552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:02.480 [2024-11-20 10:02:39.359144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.738 [2024-11-20 10:02:39.447513] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:02.738 [2024-11-20 10:02:39.447857] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:02.738 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:02.738 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:02.738 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:02.738 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:02.738 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:02.738 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:02.738 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:02.996 [2024-11-20 10:02:39.763803] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:02.996 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:02.996 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:02.996 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:02.996 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:02.996 ************************************ 00:29:02.996 START TEST lvs_grow_clean 00:29:02.996 ************************************ 00:29:02.996 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:29:02.996 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:02.996 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:02.996 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:02.997 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:02.997 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:02.997 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:02.997 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:02.997 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:02.997 10:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:03.254 10:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:03.254 10:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:03.512 10:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1d03fd99-7129-49a6-80dc-b9f05dc825b1 00:29:03.512 10:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d03fd99-7129-49a6-80dc-b9f05dc825b1 00:29:03.512 10:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:03.770 10:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:03.770 10:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:04.029 10:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1d03fd99-7129-49a6-80dc-b9f05dc825b1 lvol 150 00:29:04.287 10:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5d0c4b90-6734-4178-8e7e-7de3cf561dd8 00:29:04.287 10:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:04.287 10:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:04.545 [2024-11-20 10:02:41.223637] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:04.545 [2024-11-20 10:02:41.223737] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:04.545 true 00:29:04.545 10:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d03fd99-7129-49a6-80dc-b9f05dc825b1 00:29:04.545 10:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:04.804 10:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:04.804 10:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:05.062 10:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5d0c4b90-6734-4178-8e7e-7de3cf561dd8 00:29:05.320 10:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:05.578 [2024-11-20 10:02:42.319934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.578 10:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:05.836 10:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3872845 00:29:05.836 10:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:05.836 10:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:05.836 10:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3872845 /var/tmp/bdevperf.sock 00:29:05.836 10:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3872845 ']' 00:29:05.836 10:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:05.836 10:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:05.836 10:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:05.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:05.836 10:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:05.836 10:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:05.836 [2024-11-20 10:02:42.668453] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:29:05.836 [2024-11-20 10:02:42.668546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3872845 ] 00:29:05.837 [2024-11-20 10:02:42.745335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.094 [2024-11-20 10:02:42.808797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.094 10:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:06.094 10:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:06.094 10:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:06.660 Nvme0n1 00:29:06.660 10:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:06.919 [ 00:29:06.919 { 00:29:06.919 "name": "Nvme0n1", 00:29:06.919 "aliases": [ 00:29:06.919 "5d0c4b90-6734-4178-8e7e-7de3cf561dd8" 00:29:06.919 ], 00:29:06.919 "product_name": "NVMe disk", 00:29:06.919 "block_size": 4096, 00:29:06.919 "num_blocks": 38912, 00:29:06.919 "uuid": "5d0c4b90-6734-4178-8e7e-7de3cf561dd8", 00:29:06.919 "numa_id": 0, 00:29:06.919 "assigned_rate_limits": { 00:29:06.919 "rw_ios_per_sec": 0, 00:29:06.919 "rw_mbytes_per_sec": 0, 00:29:06.919 "r_mbytes_per_sec": 0, 00:29:06.919 "w_mbytes_per_sec": 0 00:29:06.919 }, 00:29:06.919 "claimed": false, 00:29:06.919 "zoned": false, 00:29:06.919 "supported_io_types": { 00:29:06.919 "read": true, 00:29:06.919 "write": true, 00:29:06.919 "unmap": true, 00:29:06.919 "flush": true, 00:29:06.919 "reset": true, 00:29:06.919 "nvme_admin": true, 00:29:06.919 "nvme_io": true, 00:29:06.919 "nvme_io_md": false, 00:29:06.919 "write_zeroes": true, 00:29:06.919 "zcopy": false, 00:29:06.919 "get_zone_info": false, 00:29:06.919 "zone_management": false, 00:29:06.919 "zone_append": false, 00:29:06.919 "compare": true, 00:29:06.919 "compare_and_write": true, 00:29:06.919 "abort": true, 00:29:06.919 "seek_hole": false, 00:29:06.919 "seek_data": false, 00:29:06.919 "copy": true, 
00:29:06.919 "nvme_iov_md": false 00:29:06.919 }, 00:29:06.919 "memory_domains": [ 00:29:06.919 { 00:29:06.919 "dma_device_id": "system", 00:29:06.919 "dma_device_type": 1 00:29:06.919 } 00:29:06.919 ], 00:29:06.919 "driver_specific": { 00:29:06.919 "nvme": [ 00:29:06.919 { 00:29:06.919 "trid": { 00:29:06.919 "trtype": "TCP", 00:29:06.919 "adrfam": "IPv4", 00:29:06.919 "traddr": "10.0.0.2", 00:29:06.919 "trsvcid": "4420", 00:29:06.919 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:06.919 }, 00:29:06.919 "ctrlr_data": { 00:29:06.919 "cntlid": 1, 00:29:06.919 "vendor_id": "0x8086", 00:29:06.919 "model_number": "SPDK bdev Controller", 00:29:06.919 "serial_number": "SPDK0", 00:29:06.919 "firmware_revision": "25.01", 00:29:06.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:06.919 "oacs": { 00:29:06.919 "security": 0, 00:29:06.919 "format": 0, 00:29:06.919 "firmware": 0, 00:29:06.919 "ns_manage": 0 00:29:06.919 }, 00:29:06.919 "multi_ctrlr": true, 00:29:06.919 "ana_reporting": false 00:29:06.919 }, 00:29:06.919 "vs": { 00:29:06.919 "nvme_version": "1.3" 00:29:06.919 }, 00:29:06.919 "ns_data": { 00:29:06.919 "id": 1, 00:29:06.919 "can_share": true 00:29:06.919 } 00:29:06.919 } 00:29:06.919 ], 00:29:06.919 "mp_policy": "active_passive" 00:29:06.919 } 00:29:06.919 } 00:29:06.919 ] 00:29:06.919 10:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3872979 00:29:06.919 10:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:06.919 10:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:07.177 Running I/O for 10 seconds... 
00:29:08.111 Latency(us) 00:29:08.111 [2024-11-20T09:02:45.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:08.111 Nvme0n1 : 1.00 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:29:08.111 [2024-11-20T09:02:45.025Z] =================================================================================================================== 00:29:08.111 [2024-11-20T09:02:45.025Z] Total : 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:29:08.111 00:29:09.046 10:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1d03fd99-7129-49a6-80dc-b9f05dc825b1 00:29:09.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:09.046 Nvme0n1 : 2.00 14827.50 57.92 0.00 0.00 0.00 0.00 0.00 00:29:09.046 [2024-11-20T09:02:45.960Z] =================================================================================================================== 00:29:09.046 [2024-11-20T09:02:45.960Z] Total : 14827.50 57.92 0.00 0.00 0.00 0.00 0.00 00:29:09.046 00:29:09.304 true 00:29:09.304 10:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d03fd99-7129-49a6-80dc-b9f05dc825b1 00:29:09.304 10:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:09.562 10:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:09.562 10:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:09.562 10:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3872979 00:29:10.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:10.128 Nvme0n1 : 3.00 14976.33 58.50 0.00 0.00 0.00 0.00 0.00 00:29:10.128 [2024-11-20T09:02:47.042Z] =================================================================================================================== 00:29:10.128 [2024-11-20T09:02:47.042Z] Total : 14976.33 58.50 0.00 0.00 0.00 0.00 0.00 00:29:10.128 00:29:11.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:11.061 Nvme0n1 : 4.00 15089.75 58.94 0.00 0.00 0.00 0.00 0.00 00:29:11.061 [2024-11-20T09:02:47.975Z] =================================================================================================================== 00:29:11.061 [2024-11-20T09:02:47.975Z] Total : 15089.75 58.94 0.00 0.00 0.00 0.00 0.00 00:29:11.061 00:29:11.996 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:11.996 Nvme0n1 : 5.00 15196.00 59.36 0.00 0.00 0.00 0.00 0.00 00:29:11.996 [2024-11-20T09:02:48.910Z] =================================================================================================================== 00:29:11.996 [2024-11-20T09:02:48.910Z] Total : 15196.00 59.36 0.00 0.00 0.00 0.00 0.00 00:29:11.996 00:29:13.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:13.397 Nvme0n1 : 6.00 15245.67 59.55 0.00 0.00 0.00 0.00 0.00 00:29:13.397 [2024-11-20T09:02:50.311Z] 
=================================================================================================================== 00:29:13.397 [2024-11-20T09:02:50.311Z] Total : 15245.67 59.55 0.00 0.00 0.00 0.00 0.00 00:29:13.397 00:29:13.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:13.964 Nvme0n1 : 7.00 15263.00 59.62 0.00 0.00 0.00 0.00 0.00 00:29:13.964 [2024-11-20T09:02:50.878Z] =================================================================================================================== 00:29:13.964 [2024-11-20T09:02:50.878Z] Total : 15263.00 59.62 0.00 0.00 0.00 0.00 0.00 00:29:13.964 00:29:15.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:15.337 Nvme0n1 : 8.00 15307.75 59.80 0.00 0.00 0.00 0.00 0.00 00:29:15.337 [2024-11-20T09:02:52.251Z] =================================================================================================================== 00:29:15.337 [2024-11-20T09:02:52.251Z] Total : 15307.75 59.80 0.00 0.00 0.00 0.00 0.00 00:29:15.337 00:29:16.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:16.270 Nvme0n1 : 9.00 15349.67 59.96 0.00 0.00 0.00 0.00 0.00 00:29:16.270 [2024-11-20T09:02:53.184Z] =================================================================================================================== 00:29:16.270 [2024-11-20T09:02:53.184Z] Total : 15349.67 59.96 0.00 0.00 0.00 0.00 0.00 00:29:16.270 00:29:17.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:17.203 Nvme0n1 : 10.00 15386.50 60.10 0.00 0.00 0.00 0.00 0.00 00:29:17.203 [2024-11-20T09:02:54.117Z] =================================================================================================================== 00:29:17.203 [2024-11-20T09:02:54.117Z] Total : 15386.50 60.10 0.00 0.00 0.00 0.00 0.00 00:29:17.203 00:29:17.203 00:29:17.203 Latency(us) 00:29:17.203 [2024-11-20T09:02:54.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:17.203 Nvme0n1 : 10.01 15384.59 60.10 0.00 0.00 8314.46 4369.07 18447.17 00:29:17.203 [2024-11-20T09:02:54.117Z] =================================================================================================================== 00:29:17.203 [2024-11-20T09:02:54.117Z] Total : 15384.59 60.10 0.00 0.00 8314.46 4369.07 18447.17 00:29:17.203 { 00:29:17.203 "results": [ 00:29:17.203 { 00:29:17.203 "job": "Nvme0n1", 00:29:17.203 "core_mask": "0x2", 00:29:17.203 "workload": "randwrite", 00:29:17.203 "status": "finished", 00:29:17.203 "queue_depth": 128, 00:29:17.203 "io_size": 4096, 00:29:17.203 "runtime": 10.005466, 00:29:17.203 "iops": 15384.590782678188, 00:29:17.203 "mibps": 60.09605774483667, 00:29:17.203 "io_failed": 0, 00:29:17.203 "io_timeout": 0, 00:29:17.203 "avg_latency_us": 8314.455726340257, 00:29:17.203 "min_latency_us": 4369.066666666667, 00:29:17.203 "max_latency_us": 18447.17037037037 00:29:17.203 } 00:29:17.203 ], 00:29:17.203 "core_count": 1 00:29:17.203 } 00:29:17.203 10:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3872845 00:29:17.203 10:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3872845 ']' 00:29:17.203 10:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3872845 
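Stripped of harness plumbing, the grow path exercised by this run is the sequence of rpc.py calls below. aio_file stands in for the backing file under test/nvmf/target, and $lvs / $lvol for the UUIDs printed above.

  RPC=scripts/rpc.py
  truncate -s 200M aio_file                                # 200 MiB backing file; the trace reports 49 data clusters
  $RPC bdev_aio_create aio_file aio_bdev 4096
  lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)         # 150 MiB volume, exported over NVMe/TCP as Nvme0n1
  truncate -s 400M aio_file                                # grow the backing file...
  $RPC bdev_aio_rescan aio_bdev                            # ...let the AIO bdev pick up the new block count
  $RPC bdev_lvol_grow_lvstore -u "$lvs"                    # ...and grow the lvstore into the added space
  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 after the grow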
00:29:17.203 10:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:17.203 10:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:17.203 10:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3872845 00:29:17.203 10:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:17.203 10:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:17.203 10:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3872845' 00:29:17.203 killing process with pid 3872845 00:29:17.203 10:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3872845 00:29:17.203 Received shutdown signal, test time was about 10.000000 seconds 00:29:17.203 00:29:17.204 Latency(us) 00:29:17.204 [2024-11-20T09:02:54.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.204 [2024-11-20T09:02:54.118Z] =================================================================================================================== 00:29:17.204 [2024-11-20T09:02:54.118Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:17.204 10:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3872845 00:29:17.462 10:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:17.720 10:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:17.979 10:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d03fd99-7129-49a6-80dc-b9f05dc825b1 00:29:17.979 10:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:18.237 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:18.237 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:18.237 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:18.496 [2024-11-20 10:02:55.283698] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:18.496 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d03fd99-7129-49a6-80dc-b9f05dc825b1 
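The entries that follow trace common.sh's NOT helper; reduced to intent, the teardown check around this point is that deleting the AIO base bdev must hot-remove the lvstore, so a follow-up lookup has to fail, and recreating the bdev brings the logical volume back once it is re-examined.

  $RPC bdev_aio_delete aio_bdev               # hot-removes the lvstore together with its base bdev
  NOT $RPC bdev_lvol_get_lvstores -u "$lvs"   # NOT inverts the exit status; the lookup must return -19 "No such device"
  $RPC bdev_aio_create aio_file aio_bdev 4096 # reload the backing bdev
  $RPC bdev_get_bdevs -b "$lvol" -t 2000      # waitforbdev: the lvol must reappear once examine completes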
00:29:18.496 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:18.496 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d03fd99-7129-49a6-80dc-b9f05dc825b1 00:29:18.496 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:18.496 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:18.496 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:18.496 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:18.496 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:18.496 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:18.496 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:18.496 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:18.496 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d03fd99-7129-49a6-80dc-b9f05dc825b1 00:29:18.754 request: 00:29:18.754 { 00:29:18.754 "uuid": "1d03fd99-7129-49a6-80dc-b9f05dc825b1", 00:29:18.754 "method": "bdev_lvol_get_lvstores", 00:29:18.754 "req_id": 1 00:29:18.754 } 00:29:18.754 Got JSON-RPC error response 00:29:18.754 response: 00:29:18.754 { 00:29:18.754 "code": -19, 00:29:18.754 "message": "No such device" 00:29:18.754 } 00:29:18.754 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:18.754 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:18.754 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:18.755 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:18.755 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:19.013 aio_bdev 00:29:19.013 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
5d0c4b90-6734-4178-8e7e-7de3cf561dd8 00:29:19.013 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=5d0c4b90-6734-4178-8e7e-7de3cf561dd8 00:29:19.013 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:19.013 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:19.013 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:19.013 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:19.013 10:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:19.271 10:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5d0c4b90-6734-4178-8e7e-7de3cf561dd8 -t 2000 00:29:19.531 [ 00:29:19.531 { 00:29:19.531 "name": "5d0c4b90-6734-4178-8e7e-7de3cf561dd8", 00:29:19.531 "aliases": [ 00:29:19.531 "lvs/lvol" 00:29:19.531 ], 00:29:19.531 "product_name": "Logical Volume", 00:29:19.531 "block_size": 4096, 00:29:19.531 "num_blocks": 38912, 00:29:19.531 "uuid": "5d0c4b90-6734-4178-8e7e-7de3cf561dd8", 00:29:19.531 "assigned_rate_limits": { 00:29:19.531 "rw_ios_per_sec": 0, 00:29:19.531 "rw_mbytes_per_sec": 0, 00:29:19.531 "r_mbytes_per_sec": 0, 00:29:19.531 "w_mbytes_per_sec": 0 00:29:19.531 }, 00:29:19.531 "claimed": false, 00:29:19.531 "zoned": false, 00:29:19.531 "supported_io_types": { 00:29:19.531 "read": true, 00:29:19.531 "write": true, 00:29:19.531 "unmap": true, 00:29:19.531 "flush": false, 00:29:19.531 "reset": true, 00:29:19.531 "nvme_admin": false, 00:29:19.531 "nvme_io": false, 00:29:19.531 "nvme_io_md": false, 00:29:19.531 "write_zeroes": true, 00:29:19.531 "zcopy": false, 00:29:19.531 "get_zone_info": false, 00:29:19.531 "zone_management": false, 00:29:19.531 "zone_append": false, 00:29:19.531 "compare": false, 00:29:19.531 "compare_and_write": false, 00:29:19.531 "abort": false, 00:29:19.531 "seek_hole": true, 00:29:19.531 "seek_data": true, 00:29:19.531 "copy": false, 00:29:19.531 "nvme_iov_md": false 00:29:19.531 }, 00:29:19.531 "driver_specific": { 00:29:19.531 "lvol": { 00:29:19.531 "lvol_store_uuid": "1d03fd99-7129-49a6-80dc-b9f05dc825b1", 00:29:19.531 "base_bdev": "aio_bdev", 00:29:19.531 "thin_provision": false, 00:29:19.531 "num_allocated_clusters": 38, 00:29:19.531 "snapshot": false, 00:29:19.531 "clone": false, 00:29:19.531 "esnap_clone": false 00:29:19.531 } 00:29:19.531 } 00:29:19.531 } 00:29:19.531 ] 00:29:19.531 10:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:19.531 10:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d03fd99-7129-49a6-80dc-b9f05dc825b1 00:29:19.531 10:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:19.790 10:02:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:19.790 10:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d03fd99-7129-49a6-80dc-b9f05dc825b1 00:29:19.790 10:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:20.356 10:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:20.356 10:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5d0c4b90-6734-4178-8e7e-7de3cf561dd8 00:29:20.356 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1d03fd99-7129-49a6-80dc-b9f05dc825b1 00:29:20.923 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:21.181 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:21.181 00:29:21.181 real 0m18.105s 00:29:21.181 user 0m17.682s 00:29:21.181 sys 0m1.952s 00:29:21.181 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:21.181 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:21.181 ************************************ 00:29:21.181 END TEST lvs_grow_clean 00:29:21.181 ************************************ 00:29:21.181 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:21.181 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:21.181 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:21.181 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:21.181 ************************************ 00:29:21.181 START TEST lvs_grow_dirty 00:29:21.181 ************************************ 00:29:21.181 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:21.181 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:21.181 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:21.181 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:21.181 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:21.181 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:21.181 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:21.181 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:21.181 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:21.181 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:21.439 10:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:21.439 10:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:21.697 10:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=14de5de4-b1f6-45cc-b8bd-3f7526c6de71 00:29:21.697 10:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14de5de4-b1f6-45cc-b8bd-3f7526c6de71 00:29:21.697 10:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:21.956 10:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:21.956 10:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:21.956 10:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 14de5de4-b1f6-45cc-b8bd-3f7526c6de71 lvol 150 00:29:22.214 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b58f338a-ce76-4870-ac54-4c2eab82eb66 00:29:22.214 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:22.214 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:22.472 [2024-11-20 10:02:59.343699] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:22.472 [2024-11-20 10:02:59.343799] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:22.472 true 00:29:22.472 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14de5de4-b1f6-45cc-b8bd-3f7526c6de71 00:29:22.472 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:22.730 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:22.730 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:23.296 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b58f338a-ce76-4870-ac54-4c2eab82eb66 00:29:23.296 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:23.863 [2024-11-20 10:03:00.467986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.863 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:23.863 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3875061 00:29:23.863 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:23.863 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:23.863 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3875061 /var/tmp/bdevperf.sock 00:29:23.863 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3875061 ']' 00:29:23.863 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:23.863 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:23.863 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:23.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
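Condensed from the xtrace above: lvs_grow_dirty builds a 200M AIO-backed lvstore (49 data clusters at a 4 MiB cluster size), carves a 150M lvol out of it, grows the backing file to 400M and rescans the AIO bdev (51200 -> 102400 blocks), exports the lvol over NVMe/TCP on 10.0.0.2:4420, and finally starts bdevperf with -z so it idles on /var/tmp/bdevperf.sock until driven by RPC. This is a minimal sketch of that sequence, not the test script itself; $SPDK, $AIO_FILE, $LVS and $LVOL are placeholders for the full workspace paths and UUIDs shown in the trace:

    truncate -s 200M "$AIO_FILE"
    "$SPDK/scripts/rpc.py" bdev_aio_create "$AIO_FILE" aio_bdev 4096
    LVS=$("$SPDK/scripts/rpc.py" bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)           # 49 data clusters to start
    LVOL=$("$SPDK/scripts/rpc.py" bdev_lvol_create -u "$LVS" lvol 150)
    truncate -s 400M "$AIO_FILE"                                   # grow the backing file ...
    "$SPDK/scripts/rpc.py" bdev_aio_rescan aio_bdev                # ... and let the AIO bdev pick up the new size
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # -z: bdevperf waits on its RPC socket instead of starting I/O immediately
    "$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

The grow itself (bdev_lvol_grow_lvstore) is issued later in the trace, while bdevperf keeps random writes in flight against the exported namespace via the perform_tests RPC.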
00:29:23.863 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:23.863 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:24.121 [2024-11-20 10:03:00.817545] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:29:24.121 [2024-11-20 10:03:00.817637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3875061 ] 00:29:24.121 [2024-11-20 10:03:00.883389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.121 [2024-11-20 10:03:00.941132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.380 10:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.380 10:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:24.380 10:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:24.638 Nvme0n1 00:29:24.638 10:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:24.896 [ 00:29:24.896 { 00:29:24.896 "name": "Nvme0n1", 00:29:24.896 "aliases": [ 00:29:24.896 "b58f338a-ce76-4870-ac54-4c2eab82eb66" 00:29:24.896 ], 00:29:24.896 "product_name": "NVMe disk", 00:29:24.896 "block_size": 4096, 00:29:24.896 "num_blocks": 38912, 00:29:24.896 "uuid": "b58f338a-ce76-4870-ac54-4c2eab82eb66", 00:29:24.896 "numa_id": 0, 00:29:24.896 "assigned_rate_limits": { 00:29:24.896 "rw_ios_per_sec": 0, 00:29:24.896 "rw_mbytes_per_sec": 0, 00:29:24.896 "r_mbytes_per_sec": 0, 00:29:24.896 "w_mbytes_per_sec": 0 00:29:24.896 }, 00:29:24.896 "claimed": false, 00:29:24.896 "zoned": false, 00:29:24.896 "supported_io_types": { 00:29:24.896 "read": true, 00:29:24.896 "write": true, 00:29:24.896 "unmap": true, 00:29:24.896 "flush": true, 00:29:24.896 "reset": true, 00:29:24.896 "nvme_admin": true, 00:29:24.896 "nvme_io": true, 00:29:24.896 "nvme_io_md": false, 00:29:24.896 "write_zeroes": true, 00:29:24.896 "zcopy": false, 00:29:24.896 "get_zone_info": false, 00:29:24.896 "zone_management": false, 00:29:24.896 "zone_append": false, 00:29:24.896 "compare": true, 00:29:24.896 "compare_and_write": true, 00:29:24.896 "abort": true, 00:29:24.896 "seek_hole": false, 00:29:24.896 "seek_data": false, 00:29:24.896 "copy": true, 00:29:24.896 "nvme_iov_md": false 00:29:24.896 }, 00:29:24.896 "memory_domains": [ 00:29:24.896 { 00:29:24.896 "dma_device_id": "system", 00:29:24.896 "dma_device_type": 1 00:29:24.896 } 00:29:24.896 ], 00:29:24.896 "driver_specific": { 00:29:24.896 "nvme": [ 00:29:24.896 { 00:29:24.896 "trid": { 00:29:24.896 "trtype": "TCP", 00:29:24.896 "adrfam": "IPv4", 00:29:24.896 "traddr": "10.0.0.2", 00:29:24.896 "trsvcid": "4420", 00:29:24.896 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:24.896 }, 00:29:24.896 "ctrlr_data": 
{ 00:29:24.896 "cntlid": 1, 00:29:24.896 "vendor_id": "0x8086", 00:29:24.896 "model_number": "SPDK bdev Controller", 00:29:24.896 "serial_number": "SPDK0", 00:29:24.897 "firmware_revision": "25.01", 00:29:24.897 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:24.897 "oacs": { 00:29:24.897 "security": 0, 00:29:24.897 "format": 0, 00:29:24.897 "firmware": 0, 00:29:24.897 "ns_manage": 0 00:29:24.897 }, 00:29:24.897 "multi_ctrlr": true, 00:29:24.897 "ana_reporting": false 00:29:24.897 }, 00:29:24.897 "vs": { 00:29:24.897 "nvme_version": "1.3" 00:29:24.897 }, 00:29:24.897 "ns_data": { 00:29:24.897 "id": 1, 00:29:24.897 "can_share": true 00:29:24.897 } 00:29:24.897 } 00:29:24.897 ], 00:29:24.897 "mp_policy": "active_passive" 00:29:24.897 } 00:29:24.897 } 00:29:24.897 ] 00:29:24.897 10:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3875240 00:29:24.897 10:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:24.897 10:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:24.897 Running I/O for 10 seconds... 00:29:26.271 Latency(us) 00:29:26.271 [2024-11-20T09:03:03.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:26.271 Nvme0n1 : 1.00 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:29:26.271 [2024-11-20T09:03:03.185Z] =================================================================================================================== 00:29:26.271 [2024-11-20T09:03:03.185Z] Total : 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:29:26.271 00:29:26.837 10:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 14de5de4-b1f6-45cc-b8bd-3f7526c6de71 00:29:27.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:27.095 Nvme0n1 : 2.00 15003.00 58.61 0.00 0.00 0.00 0.00 0.00 00:29:27.095 [2024-11-20T09:03:04.009Z] =================================================================================================================== 00:29:27.095 [2024-11-20T09:03:04.009Z] Total : 15003.00 58.61 0.00 0.00 0.00 0.00 0.00 00:29:27.095 00:29:27.095 true 00:29:27.095 10:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14de5de4-b1f6-45cc-b8bd-3f7526c6de71 00:29:27.095 10:03:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:27.353 10:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:27.353 10:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:27.353 10:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3875240 00:29:27.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:27.919 Nvme0n1 : 
3.00 15082.00 58.91 0.00 0.00 0.00 0.00 0.00 00:29:27.919 [2024-11-20T09:03:04.833Z] =================================================================================================================== 00:29:27.919 [2024-11-20T09:03:04.833Z] Total : 15082.00 58.91 0.00 0.00 0.00 0.00 0.00 00:29:27.919 00:29:29.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:29.290 Nvme0n1 : 4.00 15089.75 58.94 0.00 0.00 0.00 0.00 0.00 00:29:29.290 [2024-11-20T09:03:06.204Z] =================================================================================================================== 00:29:29.290 [2024-11-20T09:03:06.204Z] Total : 15089.75 58.94 0.00 0.00 0.00 0.00 0.00 00:29:29.290 00:29:30.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:30.224 Nvme0n1 : 5.00 15196.00 59.36 0.00 0.00 0.00 0.00 0.00 00:29:30.224 [2024-11-20T09:03:07.138Z] =================================================================================================================== 00:29:30.224 [2024-11-20T09:03:07.138Z] Total : 15196.00 59.36 0.00 0.00 0.00 0.00 0.00 00:29:30.224 00:29:31.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:31.158 Nvme0n1 : 6.00 15224.50 59.47 0.00 0.00 0.00 0.00 0.00 00:29:31.158 [2024-11-20T09:03:08.072Z] =================================================================================================================== 00:29:31.158 [2024-11-20T09:03:08.072Z] Total : 15224.50 59.47 0.00 0.00 0.00 0.00 0.00 00:29:31.158 00:29:32.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:32.092 Nvme0n1 : 7.00 15267.86 59.64 0.00 0.00 0.00 0.00 0.00 00:29:32.092 [2024-11-20T09:03:09.006Z] =================================================================================================================== 00:29:32.092 [2024-11-20T09:03:09.006Z] Total : 15267.86 59.64 0.00 0.00 0.00 0.00 0.00 00:29:32.092 00:29:33.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:33.024 Nvme0n1 : 8.00 15304.12 59.78 0.00 0.00 0.00 0.00 0.00 00:29:33.024 [2024-11-20T09:03:09.938Z] =================================================================================================================== 00:29:33.024 [2024-11-20T09:03:09.938Z] Total : 15304.12 59.78 0.00 0.00 0.00 0.00 0.00 00:29:33.024 00:29:33.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:33.956 Nvme0n1 : 9.00 15318.11 59.84 0.00 0.00 0.00 0.00 0.00 00:29:33.956 [2024-11-20T09:03:10.870Z] =================================================================================================================== 00:29:33.956 [2024-11-20T09:03:10.870Z] Total : 15318.11 59.84 0.00 0.00 0.00 0.00 0.00 00:29:33.956 00:29:34.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.889 Nvme0n1 : 10.00 15323.00 59.86 0.00 0.00 0.00 0.00 0.00 00:29:34.889 [2024-11-20T09:03:11.803Z] =================================================================================================================== 00:29:34.889 [2024-11-20T09:03:11.803Z] Total : 15323.00 59.86 0.00 0.00 0.00 0.00 0.00 00:29:34.889 00:29:35.147 00:29:35.147 Latency(us) 00:29:35.147 [2024-11-20T09:03:12.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.147 Nvme0n1 : 10.01 15327.24 59.87 0.00 0.00 8346.87 4320.52 18155.90 00:29:35.147 
[2024-11-20T09:03:12.061Z] =================================================================================================================== 00:29:35.147 [2024-11-20T09:03:12.061Z] Total : 15327.24 59.87 0.00 0.00 8346.87 4320.52 18155.90 00:29:35.147 { 00:29:35.147 "results": [ 00:29:35.147 { 00:29:35.147 "job": "Nvme0n1", 00:29:35.147 "core_mask": "0x2", 00:29:35.147 "workload": "randwrite", 00:29:35.147 "status": "finished", 00:29:35.147 "queue_depth": 128, 00:29:35.147 "io_size": 4096, 00:29:35.147 "runtime": 10.005583, 00:29:35.147 "iops": 15327.242800344568, 00:29:35.147 "mibps": 59.87204218884597, 00:29:35.147 "io_failed": 0, 00:29:35.147 "io_timeout": 0, 00:29:35.147 "avg_latency_us": 8346.867997756885, 00:29:35.147 "min_latency_us": 4320.521481481482, 00:29:35.147 "max_latency_us": 18155.89925925926 00:29:35.147 } 00:29:35.147 ], 00:29:35.147 "core_count": 1 00:29:35.147 } 00:29:35.147 10:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3875061 00:29:35.147 10:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3875061 ']' 00:29:35.147 10:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3875061 00:29:35.147 10:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:29:35.147 10:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:35.147 10:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3875061 00:29:35.147 10:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:35.147 10:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:35.147 10:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3875061' 00:29:35.147 killing process with pid 3875061 00:29:35.147 10:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3875061 00:29:35.147 Received shutdown signal, test time was about 10.000000 seconds 00:29:35.147 00:29:35.147 Latency(us) 00:29:35.147 [2024-11-20T09:03:12.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.147 [2024-11-20T09:03:12.061Z] =================================================================================================================== 00:29:35.147 [2024-11-20T09:03:12.061Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:35.147 10:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3875061 00:29:35.405 10:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:35.663 10:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:29:35.922 10:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14de5de4-b1f6-45cc-b8bd-3f7526c6de71 00:29:35.922 10:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:36.181 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:36.181 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:36.181 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3872410 00:29:36.181 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3872410 00:29:36.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3872410 Killed "${NVMF_APP[@]}" "$@" 00:29:36.181 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:36.181 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:36.181 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:36.181 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:36.181 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:36.181 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3877075 00:29:36.181 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:36.181 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3877075 00:29:36.181 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3877075 ']' 00:29:36.181 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.182 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:36.182 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
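This is what makes the dirty variant dirty: with the grown lvstore reporting 61 free clusters, the nvmf_tgt that owns it (pid 3872410) is killed with SIGKILL instead of being shut down cleanly, and a replacement target (pid 3877075) is launched with --interrupt-mode on a single core inside the cvl_0_0_ns_spdk namespace. Reduced to its essentials, with $SPDK and $NVMF_PID as placeholders, the pattern is roughly the following; the harness then polls the RPC socket (waitforlisten in the trace) until the new process answers on /var/tmp/spdk.sock:

    kill -9 "$NVMF_PID"                      # unclean stop: lvstore metadata is left dirty on the AIO file
    wait "$NVMF_PID" || true                 # the 'Killed' status is expected here, not a failure
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    NVMF_PID=$!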
00:29:36.182 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:36.182 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:36.440 [2024-11-20 10:03:13.095086] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:36.440 [2024-11-20 10:03:13.096257] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:29:36.440 [2024-11-20 10:03:13.096329] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.440 [2024-11-20 10:03:13.168683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.440 [2024-11-20 10:03:13.226136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.440 [2024-11-20 10:03:13.226195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.440 [2024-11-20 10:03:13.226223] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:36.440 [2024-11-20 10:03:13.226233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:36.440 [2024-11-20 10:03:13.226242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:36.440 [2024-11-20 10:03:13.226868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.440 [2024-11-20 10:03:13.313791] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:36.440 [2024-11-20 10:03:13.314083] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
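The restart notices above confirm the new target runs a single reactor on core 0 with app_thread and nvmf_tgt_poll_group_000 in interrupt mode. What follows is the recovery check: re-creating aio_bdev over the same backing file forces the blobstore to replay the dirty metadata (the bs_recover and 'Recover: blob' notices below), after which the test waits for the lvol to reappear and re-reads the lvstore counters. A rough sketch, with $SPDK, $AIO_FILE, $LVOL and $LVS standing in for the paths and UUIDs in the trace:

    "$SPDK/scripts/rpc.py" bdev_aio_create "$AIO_FILE" aio_bdev 4096     # loading the bdev triggers blobstore recovery
    "$SPDK/scripts/rpc.py" bdev_wait_for_examine                         # let vbdev_lvol claim the recovered lvstore
    "$SPDK/scripts/rpc.py" bdev_get_bdevs -b "$LVOL" -t 2000             # waitforbdev: lvol must come back within the 2000 ms window
    "$SPDK/scripts/rpc.py" bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].free_clusters'        # expected: 61
    "$SPDK/scripts/rpc.py" bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'  # expected: 99

If the grown geometry (99 total, 61 free clusters) survived the crash, the dirty case passes.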
00:29:36.440 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:36.440 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:36.440 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:36.440 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:36.440 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:36.698 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:36.698 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:36.957 [2024-11-20 10:03:13.621526] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:36.957 [2024-11-20 10:03:13.621655] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:36.957 [2024-11-20 10:03:13.621702] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:36.957 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:36.957 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b58f338a-ce76-4870-ac54-4c2eab82eb66 00:29:36.957 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b58f338a-ce76-4870-ac54-4c2eab82eb66 00:29:36.957 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:36.957 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:36.957 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:36.957 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:36.957 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:37.215 10:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b58f338a-ce76-4870-ac54-4c2eab82eb66 -t 2000 00:29:37.473 [ 00:29:37.474 { 00:29:37.474 "name": "b58f338a-ce76-4870-ac54-4c2eab82eb66", 00:29:37.474 "aliases": [ 00:29:37.474 "lvs/lvol" 00:29:37.474 ], 00:29:37.474 "product_name": "Logical Volume", 00:29:37.474 "block_size": 4096, 00:29:37.474 "num_blocks": 38912, 00:29:37.474 "uuid": "b58f338a-ce76-4870-ac54-4c2eab82eb66", 00:29:37.474 "assigned_rate_limits": { 00:29:37.474 "rw_ios_per_sec": 0, 00:29:37.474 "rw_mbytes_per_sec": 0, 00:29:37.474 
"r_mbytes_per_sec": 0, 00:29:37.474 "w_mbytes_per_sec": 0 00:29:37.474 }, 00:29:37.474 "claimed": false, 00:29:37.474 "zoned": false, 00:29:37.474 "supported_io_types": { 00:29:37.474 "read": true, 00:29:37.474 "write": true, 00:29:37.474 "unmap": true, 00:29:37.474 "flush": false, 00:29:37.474 "reset": true, 00:29:37.474 "nvme_admin": false, 00:29:37.474 "nvme_io": false, 00:29:37.474 "nvme_io_md": false, 00:29:37.474 "write_zeroes": true, 00:29:37.474 "zcopy": false, 00:29:37.474 "get_zone_info": false, 00:29:37.474 "zone_management": false, 00:29:37.474 "zone_append": false, 00:29:37.474 "compare": false, 00:29:37.474 "compare_and_write": false, 00:29:37.474 "abort": false, 00:29:37.474 "seek_hole": true, 00:29:37.474 "seek_data": true, 00:29:37.474 "copy": false, 00:29:37.474 "nvme_iov_md": false 00:29:37.474 }, 00:29:37.474 "driver_specific": { 00:29:37.474 "lvol": { 00:29:37.474 "lvol_store_uuid": "14de5de4-b1f6-45cc-b8bd-3f7526c6de71", 00:29:37.474 "base_bdev": "aio_bdev", 00:29:37.474 "thin_provision": false, 00:29:37.474 "num_allocated_clusters": 38, 00:29:37.474 "snapshot": false, 00:29:37.474 "clone": false, 00:29:37.474 "esnap_clone": false 00:29:37.474 } 00:29:37.474 } 00:29:37.474 } 00:29:37.474 ] 00:29:37.474 10:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:37.474 10:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14de5de4-b1f6-45cc-b8bd-3f7526c6de71 00:29:37.474 10:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:37.732 10:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:37.732 10:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14de5de4-b1f6-45cc-b8bd-3f7526c6de71 00:29:37.732 10:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:37.990 10:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:37.990 10:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:38.249 [2024-11-20 10:03:14.991565] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:38.249 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14de5de4-b1f6-45cc-b8bd-3f7526c6de71 00:29:38.249 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:29:38.249 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14de5de4-b1f6-45cc-b8bd-3f7526c6de71 00:29:38.249 10:03:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.249 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:38.249 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.249 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:38.249 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.249 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:38.249 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.249 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:38.249 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14de5de4-b1f6-45cc-b8bd-3f7526c6de71 00:29:38.507 request: 00:29:38.507 { 00:29:38.507 "uuid": "14de5de4-b1f6-45cc-b8bd-3f7526c6de71", 00:29:38.507 "method": "bdev_lvol_get_lvstores", 00:29:38.507 "req_id": 1 00:29:38.507 } 00:29:38.507 Got JSON-RPC error response 00:29:38.507 response: 00:29:38.507 { 00:29:38.507 "code": -19, 00:29:38.507 "message": "No such device" 00:29:38.507 } 00:29:38.507 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:29:38.507 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:38.507 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:38.507 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:38.507 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:38.765 aio_bdev 00:29:38.765 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b58f338a-ce76-4870-ac54-4c2eab82eb66 00:29:38.765 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b58f338a-ce76-4870-ac54-4c2eab82eb66 00:29:38.765 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:38.765 10:03:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:38.765 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:38.765 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:38.765 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:39.023 10:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b58f338a-ce76-4870-ac54-4c2eab82eb66 -t 2000 00:29:39.282 [ 00:29:39.282 { 00:29:39.282 "name": "b58f338a-ce76-4870-ac54-4c2eab82eb66", 00:29:39.282 "aliases": [ 00:29:39.282 "lvs/lvol" 00:29:39.282 ], 00:29:39.282 "product_name": "Logical Volume", 00:29:39.282 "block_size": 4096, 00:29:39.282 "num_blocks": 38912, 00:29:39.282 "uuid": "b58f338a-ce76-4870-ac54-4c2eab82eb66", 00:29:39.282 "assigned_rate_limits": { 00:29:39.282 "rw_ios_per_sec": 0, 00:29:39.282 "rw_mbytes_per_sec": 0, 00:29:39.282 "r_mbytes_per_sec": 0, 00:29:39.282 "w_mbytes_per_sec": 0 00:29:39.282 }, 00:29:39.282 "claimed": false, 00:29:39.282 "zoned": false, 00:29:39.282 "supported_io_types": { 00:29:39.282 "read": true, 00:29:39.282 "write": true, 00:29:39.282 "unmap": true, 00:29:39.282 "flush": false, 00:29:39.282 "reset": true, 00:29:39.282 "nvme_admin": false, 00:29:39.282 "nvme_io": false, 00:29:39.282 "nvme_io_md": false, 00:29:39.282 "write_zeroes": true, 00:29:39.282 "zcopy": false, 00:29:39.282 "get_zone_info": false, 00:29:39.282 "zone_management": false, 00:29:39.282 "zone_append": false, 00:29:39.282 "compare": false, 00:29:39.282 "compare_and_write": false, 00:29:39.282 "abort": false, 00:29:39.282 "seek_hole": true, 00:29:39.282 "seek_data": true, 00:29:39.282 "copy": false, 00:29:39.282 "nvme_iov_md": false 00:29:39.282 }, 00:29:39.282 "driver_specific": { 00:29:39.282 "lvol": { 00:29:39.282 "lvol_store_uuid": "14de5de4-b1f6-45cc-b8bd-3f7526c6de71", 00:29:39.282 "base_bdev": "aio_bdev", 00:29:39.282 "thin_provision": false, 00:29:39.282 "num_allocated_clusters": 38, 00:29:39.282 "snapshot": false, 00:29:39.282 "clone": false, 00:29:39.282 "esnap_clone": false 00:29:39.282 } 00:29:39.282 } 00:29:39.282 } 00:29:39.282 ] 00:29:39.282 10:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:39.282 10:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14de5de4-b1f6-45cc-b8bd-3f7526c6de71 00:29:39.282 10:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:39.540 10:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:39.540 10:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14de5de4-b1f6-45cc-b8bd-3f7526c6de71 00:29:39.540 10:03:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:39.798 10:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:39.798 10:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b58f338a-ce76-4870-ac54-4c2eab82eb66 00:29:40.057 10:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 14de5de4-b1f6-45cc-b8bd-3f7526c6de71 00:29:40.315 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:40.882 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:40.882 00:29:40.882 real 0m19.549s 00:29:40.882 user 0m36.633s 00:29:40.882 sys 0m4.587s 00:29:40.882 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:40.882 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:40.882 ************************************ 00:29:40.882 END TEST lvs_grow_dirty 00:29:40.882 ************************************ 00:29:40.882 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:40.883 nvmf_trace.0 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
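Teardown mirrors setup in reverse and then salvages the tracepoint buffer produced by -e 0xFFFF: the lvol is deleted before its lvstore, the lvstore before the AIO bdev, the backing file is removed, and nvmf_trace.0 is tarred out of /dev/shm for offline analysis before nvmftestfini unloads the NVMe/TCP modules below. As a sketch, with $SPDK, $AIO_FILE, $LVOL, $LVS and $OUT as placeholders for the paths and UUIDs above:

    "$SPDK/scripts/rpc.py" bdev_lvol_delete "$LVOL"
    "$SPDK/scripts/rpc.py" bdev_lvol_delete_lvstore -u "$LVS"
    "$SPDK/scripts/rpc.py" bdev_aio_delete aio_bdev
    rm -f "$AIO_FILE"
    tar -C /dev/shm/ -cvzf "$OUT/nvmf_trace.0_shm.tar.gz" nvmf_trace.0   # keep the trace for post-mortem debugging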
00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:40.883 rmmod nvme_tcp 00:29:40.883 rmmod nvme_fabrics 00:29:40.883 rmmod nvme_keyring 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3877075 ']' 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3877075 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3877075 ']' 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3877075 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3877075 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3877075' 00:29:40.883 killing process with pid 3877075 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3877075 00:29:40.883 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3877075 00:29:41.143 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:41.143 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:41.143 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:41.143 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:41.143 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:41.143 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:41.143 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:41.143 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:41.143 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:41.143 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.143 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.143 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.048 10:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:43.048 00:29:43.048 real 0m42.938s 00:29:43.048 user 0m56.060s 00:29:43.048 sys 0m8.357s 00:29:43.048 10:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:43.048 10:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:43.048 ************************************ 00:29:43.048 END TEST nvmf_lvs_grow 00:29:43.048 ************************************ 00:29:43.048 10:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:43.048 10:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:43.048 10:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:43.048 10:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:43.307 ************************************ 00:29:43.307 START TEST nvmf_bdev_io_wait 00:29:43.307 ************************************ 00:29:43.307 10:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:43.307 * Looking for test storage... 
00:29:43.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:43.307 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:43.307 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:43.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.308 --rc genhtml_branch_coverage=1 00:29:43.308 --rc genhtml_function_coverage=1 00:29:43.308 --rc genhtml_legend=1 00:29:43.308 --rc geninfo_all_blocks=1 00:29:43.308 --rc geninfo_unexecuted_blocks=1 00:29:43.308 00:29:43.308 ' 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:43.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.308 --rc genhtml_branch_coverage=1 00:29:43.308 --rc genhtml_function_coverage=1 00:29:43.308 --rc genhtml_legend=1 00:29:43.308 --rc geninfo_all_blocks=1 00:29:43.308 --rc geninfo_unexecuted_blocks=1 00:29:43.308 00:29:43.308 ' 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:43.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.308 --rc genhtml_branch_coverage=1 00:29:43.308 --rc genhtml_function_coverage=1 00:29:43.308 --rc genhtml_legend=1 00:29:43.308 --rc geninfo_all_blocks=1 00:29:43.308 --rc geninfo_unexecuted_blocks=1 00:29:43.308 00:29:43.308 ' 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:43.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.308 --rc genhtml_branch_coverage=1 00:29:43.308 --rc genhtml_function_coverage=1 00:29:43.308 --rc genhtml_legend=1 00:29:43.308 --rc geninfo_all_blocks=1 00:29:43.308 --rc 
geninfo_unexecuted_blocks=1 00:29:43.308 00:29:43.308 ' 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:29:43.308 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:43.309 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:43.309 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:43.309 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:43.309 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:43.309 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:43.309 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:43.309 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:43.309 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:43.309 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:43.309 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:43.309 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:43.309 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:43.309 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.309 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.309 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.309 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:43.309 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:43.309 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:43.309 10:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:45.215 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
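The arrays above pin the supported NIC device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and a range of Mellanox parts), and because this is an e810/tcp phy run, pci_devs collapses to the two E810 functions. The loop that follows turns each PCI function into a kernel interface name through sysfs; the same lookup in isolation (PCI address taken from the "Found 0000:09:00.0" line below):

# Resolve one NIC's PCI function to its net device name, as the trace below does.
pci=0000:09:00.0                                    # first E810 port in this run (0x8086:0x159b)
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # the glob traced at nvmf/common.sh@411
pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs path, leaving e.g. cvl_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"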
00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:45.216 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:45.216 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:45.216 Found net devices under 0000:09:00.0: cvl_0_0 00:29:45.216 
10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:45.216 Found net devices under 0000:09:00.1: cvl_0_1 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:45.216 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:45.474 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:45.474 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:45.474 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:45.474 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:45.474 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:45.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:45.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:29:45.475 00:29:45.475 --- 10.0.0.2 ping statistics --- 00:29:45.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.475 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:45.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:45.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:29:45.475 00:29:45.475 --- 10.0.0.1 ping statistics --- 00:29:45.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.475 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3879597 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3879597 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3879597 ']' 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:45.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
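Condensed from the nvmf_tcp_init trace above: the first E810 port (cvl_0_0) is moved into a private namespace and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; an iptables rule tagged SPDK_NVMF admits port 4420, and both directions are ping-checked. The same steps as a stand-alone sketch (interface names and addresses are the ones traced):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator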
00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:45.475 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:45.475 [2024-11-20 10:03:22.339927] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:45.475 [2024-11-20 10:03:22.341066] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:29:45.475 [2024-11-20 10:03:22.341130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:45.734 [2024-11-20 10:03:22.416188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:45.734 [2024-11-20 10:03:22.474691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:45.734 [2024-11-20 10:03:22.474745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:45.734 [2024-11-20 10:03:22.474773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:45.734 [2024-11-20 10:03:22.474784] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:45.734 [2024-11-20 10:03:22.474793] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:45.734 [2024-11-20 10:03:22.476361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.734 [2024-11-20 10:03:22.476425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:45.734 [2024-11-20 10:03:22.476491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:45.734 [2024-11-20 10:03:22.476494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.734 [2024-11-20 10:03:22.477003] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
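nvmfappstart then launches the target inside that namespace with all trace groups (-e 0xFFFF), interrupt mode, a four-core mask and --wait-for-rpc, and blocks until the RPC socket is up. The launch line is verbatim from the trace just above; the polling loop is only an illustrative stand-in for the suite's waitforlisten helper.

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!
# Stand-in for waitforlisten: poll for the UNIX-domain RPC socket while the process lives.
until [ -S /var/tmp/spdk.sock ]; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.2
done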
00:29:45.734 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:45.734 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:29:45.734 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:45.734 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:45.734 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:45.734 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:45.734 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:45.734 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.734 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:45.734 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.734 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:45.734 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.734 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:45.994 [2024-11-20 10:03:22.693082] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:45.994 [2024-11-20 10:03:22.693325] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:45.994 [2024-11-20 10:03:22.694246] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:45.994 [2024-11-20 10:03:22.695090] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
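rpc_cmd in this trace appears to wrap scripts/rpc.py pointed at /var/tmp/spdk.sock. The bdev_set_options and framework_start_init calls above, plus the transport, malloc bdev, subsystem and listener calls just below, are equivalent to the sequence sketched here (method names and arguments are exactly as traced; the explicit rpc.py -s form is an assumption):

RPC="scripts/rpc.py -s /var/tmp/spdk.sock"     # assumed expansion of the rpc_cmd helper
$RPC bdev_set_options -p 5 -c 1                # small bdev IO pool/cache so bdevperf hits the IO-wait path
$RPC framework_start_init                      # complete the startup deferred by --wait-for-rpc
$RPC nvmf_create_transport -t tcp -o -u 8192   # transport options exactly as traced
$RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB, 512-byte-block malloc bdev
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420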
00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:45.994 [2024-11-20 10:03:22.701246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:45.994 Malloc0 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:45.994 [2024-11-20 10:03:22.761374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3879634 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3879635 00:29:45.994 10:03:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3879638 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:45.994 { 00:29:45.994 "params": { 00:29:45.994 "name": "Nvme$subsystem", 00:29:45.994 "trtype": "$TEST_TRANSPORT", 00:29:45.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:45.994 "adrfam": "ipv4", 00:29:45.994 "trsvcid": "$NVMF_PORT", 00:29:45.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:45.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:45.994 "hdgst": ${hdgst:-false}, 00:29:45.994 "ddgst": ${ddgst:-false} 00:29:45.994 }, 00:29:45.994 "method": "bdev_nvme_attach_controller" 00:29:45.994 } 00:29:45.994 EOF 00:29:45.994 )") 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3879640 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:45.994 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:45.994 { 00:29:45.994 "params": { 00:29:45.994 "name": "Nvme$subsystem", 00:29:45.994 "trtype": "$TEST_TRANSPORT", 00:29:45.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:45.994 "adrfam": "ipv4", 00:29:45.994 "trsvcid": "$NVMF_PORT", 00:29:45.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:45.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:45.995 "hdgst": ${hdgst:-false}, 00:29:45.995 "ddgst": ${ddgst:-false} 00:29:45.995 }, 00:29:45.995 "method": "bdev_nvme_attach_controller" 00:29:45.995 } 00:29:45.995 EOF 
00:29:45.995 )") 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:45.995 { 00:29:45.995 "params": { 00:29:45.995 "name": "Nvme$subsystem", 00:29:45.995 "trtype": "$TEST_TRANSPORT", 00:29:45.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:45.995 "adrfam": "ipv4", 00:29:45.995 "trsvcid": "$NVMF_PORT", 00:29:45.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:45.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:45.995 "hdgst": ${hdgst:-false}, 00:29:45.995 "ddgst": ${ddgst:-false} 00:29:45.995 }, 00:29:45.995 "method": "bdev_nvme_attach_controller" 00:29:45.995 } 00:29:45.995 EOF 00:29:45.995 )") 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:45.995 { 00:29:45.995 "params": { 00:29:45.995 "name": "Nvme$subsystem", 00:29:45.995 "trtype": "$TEST_TRANSPORT", 00:29:45.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:45.995 "adrfam": "ipv4", 00:29:45.995 "trsvcid": "$NVMF_PORT", 00:29:45.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:45.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:45.995 "hdgst": ${hdgst:-false}, 00:29:45.995 "ddgst": ${ddgst:-false} 00:29:45.995 }, 00:29:45.995 "method": "bdev_nvme_attach_controller" 00:29:45.995 } 00:29:45.995 EOF 00:29:45.995 )") 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3879634 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:45.995 "params": { 00:29:45.995 "name": "Nvme1", 00:29:45.995 "trtype": "tcp", 00:29:45.995 "traddr": "10.0.0.2", 00:29:45.995 "adrfam": "ipv4", 00:29:45.995 "trsvcid": "4420", 00:29:45.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:45.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:45.995 "hdgst": false, 00:29:45.995 "ddgst": false 00:29:45.995 }, 00:29:45.995 "method": "bdev_nvme_attach_controller" 00:29:45.995 }' 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:45.995 "params": { 00:29:45.995 "name": "Nvme1", 00:29:45.995 "trtype": "tcp", 00:29:45.995 "traddr": "10.0.0.2", 00:29:45.995 "adrfam": "ipv4", 00:29:45.995 "trsvcid": "4420", 00:29:45.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:45.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:45.995 "hdgst": false, 00:29:45.995 "ddgst": false 00:29:45.995 }, 00:29:45.995 "method": "bdev_nvme_attach_controller" 00:29:45.995 }' 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:45.995 "params": { 00:29:45.995 "name": "Nvme1", 00:29:45.995 "trtype": "tcp", 00:29:45.995 "traddr": "10.0.0.2", 00:29:45.995 "adrfam": "ipv4", 00:29:45.995 "trsvcid": "4420", 00:29:45.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:45.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:45.995 "hdgst": false, 00:29:45.995 "ddgst": false 00:29:45.995 }, 00:29:45.995 "method": "bdev_nvme_attach_controller" 00:29:45.995 }' 00:29:45.995 10:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:45.995 "params": { 00:29:45.995 "name": "Nvme1", 00:29:45.995 "trtype": "tcp", 00:29:45.995 "traddr": "10.0.0.2", 00:29:45.995 "adrfam": "ipv4", 00:29:45.995 "trsvcid": "4420", 00:29:45.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:45.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:45.995 "hdgst": false, 00:29:45.995 "ddgst": false 00:29:45.995 }, 00:29:45.995 "method": "bdev_nvme_attach_controller" 00:29:45.995 }' 00:29:45.995 [2024-11-20 10:03:22.814165] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:29:45.995 [2024-11-20 10:03:22.814165] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:29:45.995 [2024-11-20 10:03:22.814165] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
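Each bdevperf instance reads that generated JSON on /dev/fd/63 via process substitution and drives one workload on its own core: write on 0x10, read on 0x20, flush on 0x40 and unmap on 0x80, each at queue depth 128 with 4 KiB IOs for one second and a 256 MB app memory size. Reconstructed from the traced command lines; run_perf is just local shorthand and gen_nvmf_target_json is assumed to be sourced from the harness:

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf   # path as traced
run_perf() {   # args: core mask, shm id, workload
    "$BDEVPERF" -m "$1" -i "$2" --json <(gen_nvmf_target_json) -q 128 -o 4096 -w "$3" -t 1 -s 256 &
}
run_perf 0x10 1 write    # WRITE_PID (3879634 in this run)
run_perf 0x20 2 read     # READ_PID  (3879635)
run_perf 0x40 3 flush    # FLUSH_PID (3879638)
run_perf 0x80 4 unmap    # UNMAP_PID (3879640)
wait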
[2024-11-20 10:03:22.814253] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:29:45.995 [2024-11-20 10:03:22.814253] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:29:45.995 [2024-11-20 10:03:22.814252] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:29:45.995 [2024-11-20 10:03:22.814472] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization...
00:29:45.995 [2024-11-20 10:03:22.814540] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:29:46.278 [2024-11-20 10:03:23.009625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:46.278 [2024-11-20 10:03:23.065525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:29:46.278 [2024-11-20 10:03:23.115535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:46.618 [2024-11-20 10:03:23.173032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:29:46.618 [2024-11-20 10:03:23.218093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:46.618 [2024-11-20 10:03:23.275957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:46.618 [2024-11-20 10:03:23.299524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:46.618 [2024-11-20 10:03:23.352508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:29:46.618 Running I/O for 1 seconds... 00:29:46.618 Running I/O for 1 seconds... 00:29:46.618 Running I/O for 1 seconds... 00:29:46.876 Running I/O for 1 seconds... 
00:29:47.810 9637.00 IOPS, 37.64 MiB/s 00:29:47.810 Latency(us) 00:29:47.810 [2024-11-20T09:03:24.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:47.810 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:47.810 Nvme1n1 : 1.01 9697.81 37.88 0.00 0.00 13144.86 4708.88 15534.46 00:29:47.810 [2024-11-20T09:03:24.724Z] =================================================================================================================== 00:29:47.810 [2024-11-20T09:03:24.724Z] Total : 9697.81 37.88 0.00 0.00 13144.86 4708.88 15534.46 00:29:47.810 187152.00 IOPS, 731.06 MiB/s 00:29:47.810 Latency(us) 00:29:47.810 [2024-11-20T09:03:24.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:47.810 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:47.810 Nvme1n1 : 1.00 186796.87 729.68 0.00 0.00 681.45 291.27 1881.13 00:29:47.810 [2024-11-20T09:03:24.724Z] =================================================================================================================== 00:29:47.810 [2024-11-20T09:03:24.724Z] Total : 186796.87 729.68 0.00 0.00 681.45 291.27 1881.13 00:29:47.810 8241.00 IOPS, 32.19 MiB/s 00:29:47.810 Latency(us) 00:29:47.810 [2024-11-20T09:03:24.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:47.810 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:29:47.810 Nvme1n1 : 1.01 8288.32 32.38 0.00 0.00 15362.47 4538.97 17864.63 00:29:47.810 [2024-11-20T09:03:24.724Z] =================================================================================================================== 00:29:47.810 [2024-11-20T09:03:24.724Z] Total : 8288.32 32.38 0.00 0.00 15362.47 4538.97 17864.63 00:29:47.810 9829.00 IOPS, 38.39 MiB/s 00:29:47.810 Latency(us) 00:29:47.810 [2024-11-20T09:03:24.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:47.810 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:47.810 Nvme1n1 : 1.01 9912.31 38.72 0.00 0.00 12872.13 2475.80 19418.07 00:29:47.810 [2024-11-20T09:03:24.724Z] =================================================================================================================== 00:29:47.810 [2024-11-20T09:03:24.724Z] Total : 9912.31 38.72 0.00 0.00 12872.13 2475.80 19418.07 00:29:47.810 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3879635 00:29:47.810 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3879638 00:29:47.810 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3879640 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:48.068 rmmod nvme_tcp 00:29:48.068 rmmod nvme_fabrics 00:29:48.068 rmmod nvme_keyring 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3879597 ']' 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3879597 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3879597 ']' 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3879597 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3879597 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3879597' 00:29:48.068 killing process with pid 3879597 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3879597 00:29:48.068 10:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3879597 00:29:48.326 10:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:48.326 10:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:48.326 10:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:48.326 10:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:29:48.326 10:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
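nvmftestfini then unwinds the fixture: sync, unload the initiator-side NVMe modules, kill the target, strip the SPDK_NVMF-tagged iptables rules, and drop the namespace. Condensed from the surrounding entries; the explicit ip netns delete is an assumption standing in for the _remove_spdk_ns helper, whose body is suppressed by xtrace_disable_per_cmd in this trace.

sync
modprobe -v -r nvme-tcp            # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above are its output
modprobe -v -r nvme-fabrics
kill -0 3879597 && kill 3879597    # killprocess on the nvmf_tgt pid from this run
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except the SPDK_NVMF-tagged rules
ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of the hidden _remove_spdk_ns step
ip -4 addr flush cvl_0_1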
00:29:48.326 10:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:48.326 10:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:29:48.326 10:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:48.326 10:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:48.326 10:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.326 10:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.326 10:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:50.859 00:29:50.859 real 0m7.202s 00:29:50.859 user 0m14.009s 00:29:50.859 sys 0m4.202s 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:50.859 ************************************ 00:29:50.859 END TEST nvmf_bdev_io_wait 00:29:50.859 ************************************ 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:50.859 ************************************ 00:29:50.859 START TEST nvmf_queue_depth 00:29:50.859 ************************************ 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:50.859 * Looking for test storage... 
00:29:50.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:50.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.859 --rc genhtml_branch_coverage=1 00:29:50.859 --rc genhtml_function_coverage=1 00:29:50.859 --rc genhtml_legend=1 00:29:50.859 --rc geninfo_all_blocks=1 00:29:50.859 --rc geninfo_unexecuted_blocks=1 00:29:50.859 00:29:50.859 ' 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:50.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.859 --rc genhtml_branch_coverage=1 00:29:50.859 --rc genhtml_function_coverage=1 00:29:50.859 --rc genhtml_legend=1 00:29:50.859 --rc geninfo_all_blocks=1 00:29:50.859 --rc geninfo_unexecuted_blocks=1 00:29:50.859 00:29:50.859 ' 00:29:50.859 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:50.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.859 --rc genhtml_branch_coverage=1 00:29:50.859 --rc genhtml_function_coverage=1 00:29:50.859 --rc genhtml_legend=1 00:29:50.859 --rc geninfo_all_blocks=1 00:29:50.859 --rc geninfo_unexecuted_blocks=1 00:29:50.859 00:29:50.860 ' 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:50.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.860 --rc genhtml_branch_coverage=1 00:29:50.860 --rc genhtml_function_coverage=1 00:29:50.860 --rc genhtml_legend=1 00:29:50.860 --rc geninfo_all_blocks=1 00:29:50.860 --rc 
geninfo_unexecuted_blocks=1 00:29:50.860 00:29:50.860 ' 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:29:50.860 10:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
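The arrays declared here feed gather_supported_nvmf_pci_devs, which in the trace below matches the two Intel 0x159b (E810) functions at 0000:09:00.0/0000:09:00.1 and resolves their net devices from sysfs. Out of band, the same lookup can be sketched with lspci (an illustrative equivalent, not the mechanism the script itself uses):

  # list E810 (8086:159b) PCI functions and the kernel net devices bound to them
  for pci in $(lspci -Dmmn -d 8086:159b | awk '{print $1}'); do
    echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"    # cvl_0_0 / cvl_0_1 on this node
  done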
00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:52.762 10:03:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:52.762 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:52.762 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.762 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 
00:29:52.763 Found net devices under 0000:09:00.0: cvl_0_0 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:52.763 Found net devices under 0000:09:00.1: cvl_0_1 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:52.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:52.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:29:52.763 00:29:52.763 --- 10.0.0.2 ping statistics --- 00:29:52.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.763 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:52.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:52.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:29:52.763 00:29:52.763 --- 10.0.0.1 ping statistics --- 00:29:52.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.763 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:52.763 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:53.022 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:29:53.022 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:53.022 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:53.022 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:53.022 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3881864 00:29:53.022 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:53.022 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3881864 00:29:53.022 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3881864 ']' 00:29:53.022 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.022 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.022 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
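For reference, the nvmf_tcp_init/nvmfappstart sequence traced above boils down to moving one of the two e810 ports into a private network namespace, addressing both ends, opening TCP port 4420, and launching nvmf_tgt inside that namespace. A minimal sketch, using the same interface names and 10.0.0.0/24 addressing as this run:

  # move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic in; the rule is comment-tagged SPDK_NVMF so teardown can drop it via iptables-save | grep -v SPDK_NVMF | iptables-restore
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2            # sanity-check the path before starting the target
  # start the target in the namespace; -m 0x2 restricts it to core 1, --interrupt-mode makes its reactors interrupt-driven
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2

The 'Reactor started on core 1' and 'Set spdk_thread ... to intr mode' notices that follow are the visible effect of the -m 0x2 and --interrupt-mode flags.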
00:29:53.022 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.022 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:53.022 [2024-11-20 10:03:29.741056] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:53.022 [2024-11-20 10:03:29.742156] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:29:53.022 [2024-11-20 10:03:29.742214] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.022 [2024-11-20 10:03:29.820041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.022 [2024-11-20 10:03:29.879967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.022 [2024-11-20 10:03:29.880019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.022 [2024-11-20 10:03:29.880047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.022 [2024-11-20 10:03:29.880057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.022 [2024-11-20 10:03:29.880067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:53.022 [2024-11-20 10:03:29.880675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.280 [2024-11-20 10:03:29.968649] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:53.280 [2024-11-20 10:03:29.968956] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
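The remainder of this test case, traced below, first provisions the target over its RPC socket (rpc_cmd is the test framework's wrapper around scripts/rpc.py) and then drives the exported namespace from a second SPDK app, bdevperf, at queue depth 1024. A rough standalone equivalent, run from the spdk tree with the same names and the 10.0.0.2:4420 listener used here:

  # target side: TCP transport, a 64 MiB / 512 B-block Malloc bdev, and a subsystem exporting it
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: start bdevperf idle (-z), attach the remote controller over bdevperf's own RPC socket, then kick off the run
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # (in the test, waitforlisten polls /var/tmp/bdevperf.sock before the next step)
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

In the Latency(us) summary near the end of the run, the MiB/s column is just IOPS times the 4 KiB io_size, e.g. 8255.77 * 4096 B / 2^20 ≈ 32.25 MiB/s.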
00:29:53.280 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.280 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:53.280 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:53.280 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:53.280 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:53.280 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.280 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:53.280 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.280 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:53.280 [2024-11-20 10:03:30.021405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.280 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.280 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:53.280 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.280 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:53.280 Malloc0 00:29:53.280 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.280 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:53.280 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.280 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:53.280 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.280 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:53.280 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.281 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:53.281 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.281 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.281 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:53.281 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:53.281 [2024-11-20 10:03:30.085400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.281 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.281 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3881998 00:29:53.281 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:53.281 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:29:53.281 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3881998 /var/tmp/bdevperf.sock 00:29:53.281 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3881998 ']' 00:29:53.281 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:53.281 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.281 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:53.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:53.281 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.281 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:53.281 [2024-11-20 10:03:30.137539] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
00:29:53.281 [2024-11-20 10:03:30.137628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3881998 ] 00:29:53.539 [2024-11-20 10:03:30.207487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.539 [2024-11-20 10:03:30.271614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.539 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.539 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:53.539 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:53.539 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.539 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:53.797 NVMe0n1 00:29:53.797 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.797 10:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:53.797 Running I/O for 10 seconds... 00:29:55.663 8192.00 IOPS, 32.00 MiB/s [2024-11-20T09:03:33.950Z] 8192.00 IOPS, 32.00 MiB/s [2024-11-20T09:03:34.884Z] 8193.33 IOPS, 32.01 MiB/s [2024-11-20T09:03:35.817Z] 8193.00 IOPS, 32.00 MiB/s [2024-11-20T09:03:36.751Z] 8194.80 IOPS, 32.01 MiB/s [2024-11-20T09:03:37.684Z] 8196.67 IOPS, 32.02 MiB/s [2024-11-20T09:03:38.617Z] 8199.14 IOPS, 32.03 MiB/s [2024-11-20T09:03:39.611Z] 8200.50 IOPS, 32.03 MiB/s [2024-11-20T09:03:40.983Z] 8218.89 IOPS, 32.11 MiB/s [2024-11-20T09:03:40.983Z] 8209.00 IOPS, 32.07 MiB/s 00:30:04.069 Latency(us) 00:30:04.069 [2024-11-20T09:03:40.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.069 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:04.069 Verification LBA range: start 0x0 length 0x4000 00:30:04.069 NVMe0n1 : 10.07 8255.77 32.25 0.00 0.00 123469.76 5534.15 72235.24 00:30:04.069 [2024-11-20T09:03:40.983Z] =================================================================================================================== 00:30:04.069 [2024-11-20T09:03:40.983Z] Total : 8255.77 32.25 0.00 0.00 123469.76 5534.15 72235.24 00:30:04.069 { 00:30:04.069 "results": [ 00:30:04.069 { 00:30:04.069 "job": "NVMe0n1", 00:30:04.069 "core_mask": "0x1", 00:30:04.069 "workload": "verify", 00:30:04.069 "status": "finished", 00:30:04.069 "verify_range": { 00:30:04.069 "start": 0, 00:30:04.069 "length": 16384 00:30:04.069 }, 00:30:04.069 "queue_depth": 1024, 00:30:04.069 "io_size": 4096, 00:30:04.069 "runtime": 10.067379, 00:30:04.069 "iops": 8255.77342424478, 00:30:04.069 "mibps": 32.249114938456174, 00:30:04.069 "io_failed": 0, 00:30:04.069 "io_timeout": 0, 00:30:04.069 "avg_latency_us": 123469.75998488466, 00:30:04.069 "min_latency_us": 5534.151111111111, 00:30:04.069 "max_latency_us": 72235.23555555556 00:30:04.069 } 00:30:04.069 ], 
00:30:04.069 "core_count": 1 00:30:04.069 } 00:30:04.069 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3881998 00:30:04.069 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3881998 ']' 00:30:04.069 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3881998 00:30:04.069 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:04.069 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:04.069 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3881998 00:30:04.069 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:04.069 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:04.069 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3881998' 00:30:04.069 killing process with pid 3881998 00:30:04.069 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3881998 00:30:04.069 Received shutdown signal, test time was about 10.000000 seconds 00:30:04.069 00:30:04.069 Latency(us) 00:30:04.069 [2024-11-20T09:03:40.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.069 [2024-11-20T09:03:40.983Z] =================================================================================================================== 00:30:04.069 [2024-11-20T09:03:40.983Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:04.069 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3881998 00:30:04.069 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:04.069 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:04.069 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:04.069 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:04.069 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:04.069 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:04.069 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:04.069 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:04.069 rmmod nvme_tcp 00:30:04.069 rmmod nvme_fabrics 00:30:04.069 rmmod nvme_keyring 00:30:04.327 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:04.327 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:04.327 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:04.327 10:03:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3881864 ']' 00:30:04.327 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3881864 00:30:04.327 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3881864 ']' 00:30:04.327 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3881864 00:30:04.327 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:04.327 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:04.327 10:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3881864 00:30:04.327 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:04.327 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:04.327 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3881864' 00:30:04.327 killing process with pid 3881864 00:30:04.327 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3881864 00:30:04.327 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3881864 00:30:04.587 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:04.587 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:04.587 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:04.587 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:04.587 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:04.587 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:30:04.587 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:04.587 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:04.587 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:04.587 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.587 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.587 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.493 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:06.493 00:30:06.493 real 0m16.111s 00:30:06.493 user 0m21.106s 00:30:06.493 sys 0m3.866s 00:30:06.493 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:30:06.493 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:06.493 ************************************ 00:30:06.493 END TEST nvmf_queue_depth 00:30:06.493 ************************************ 00:30:06.493 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:06.493 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:06.493 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:06.493 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:06.493 ************************************ 00:30:06.493 START TEST nvmf_target_multipath 00:30:06.493 ************************************ 00:30:06.493 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:06.753 * Looking for test storage... 00:30:06.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:06.753 10:03:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:06.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.753 --rc genhtml_branch_coverage=1 00:30:06.753 --rc genhtml_function_coverage=1 00:30:06.753 --rc genhtml_legend=1 00:30:06.753 --rc geninfo_all_blocks=1 00:30:06.753 --rc geninfo_unexecuted_blocks=1 00:30:06.753 00:30:06.753 ' 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:06.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.753 --rc genhtml_branch_coverage=1 00:30:06.753 --rc genhtml_function_coverage=1 00:30:06.753 --rc genhtml_legend=1 00:30:06.753 --rc geninfo_all_blocks=1 00:30:06.753 --rc geninfo_unexecuted_blocks=1 00:30:06.753 00:30:06.753 ' 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:06.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.753 --rc genhtml_branch_coverage=1 00:30:06.753 --rc genhtml_function_coverage=1 00:30:06.753 --rc genhtml_legend=1 00:30:06.753 --rc geninfo_all_blocks=1 00:30:06.753 --rc 
geninfo_unexecuted_blocks=1 00:30:06.753 00:30:06.753 ' 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:06.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.753 --rc genhtml_branch_coverage=1 00:30:06.753 --rc genhtml_function_coverage=1 00:30:06.753 --rc genhtml_legend=1 00:30:06.753 --rc geninfo_all_blocks=1 00:30:06.753 --rc geninfo_unexecuted_blocks=1 00:30:06.753 00:30:06.753 ' 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
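Editor's note: the trace above is scripts/common.sh deciding, via cmp_versions, whether the installed lcov (1.15, taken from 'lcov --version | awk '{print $NF}'') is older than 2, which selects the pre-2.0 spelling of the branch/function coverage options exported just afterwards. Below is a minimal bash sketch of the same dotted-version comparison idea; it is an illustrative re-implementation under the assumption of purely numeric version fields, not the exact SPDK helper.

    # Sketch: return 0 (true) when dotted version $1 is strictly less than $2.
    # Assumption: numeric fields only (e.g. "1.15" vs "2"); missing fields count as 0.
    lt_version() {
      local -a a b
      IFS=. read -ra a <<<"$1"
      IFS=. read -ra b <<<"$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
      done
      return 1   # equal versions are not "less than"
    }

    # Same shape as the check in the trace: pick legacy option names for lcov < 2.
    lt_version "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov option names"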
00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.753 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:06.754 10:03:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:06.754 10:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
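Editor's note: the block that follows (gather_supported_nvmf_pci_devs) classifies NICs by PCI vendor:device ID -- here two Intel E810 functions, 0x8086:0x159b -- and then resolves each PCI function to its kernel interface by listing /sys/bus/pci/devices/<bdf>/net, which is how the trace arrives at cvl_0_0 and cvl_0_1. A rough sketch of that sysfs lookup, assuming lspci -Dnn is available (the real common.sh walks its own pci_bus_cache instead):

    # Sketch: list net interfaces backed by Intel E810 functions (0x8086:0x159b or 0x1592).
    for bdf in $(lspci -Dnn | awk '/8086:(159b|1592)/{print $1}'); do
      for netdir in /sys/bus/pci/devices/"$bdf"/net/*; do
        [[ -e $netdir ]] || continue        # function with no netdev bound
        echo "PCI $bdf -> ${netdir##*/}"    # e.g. "PCI 0000:09:00.0 -> cvl_0_0"
      done
    done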
00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:09.286 10:03:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:09.286 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:09.286 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:09.286 10:03:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:09.286 Found net devices under 0000:09:00.0: cvl_0_0 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:09.286 Found net devices under 0000:09:00.1: cvl_0_1 00:30:09.286 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:09.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:09.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:30:09.287 00:30:09.287 --- 10.0.0.2 ping statistics --- 00:30:09.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.287 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:09.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:09.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:30:09.287 00:30:09.287 --- 10.0.0.1 ping statistics --- 00:30:09.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.287 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:09.287 only one NIC for nvmf test 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:09.287 rmmod nvme_tcp 00:30:09.287 rmmod nvme_fabrics 00:30:09.287 rmmod nvme_keyring 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:09.287 10:03:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.287 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.191 10:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:11.191 10:03:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:11.191 00:30:11.191 real 0m4.639s 00:30:11.191 user 0m0.945s 00:30:11.191 sys 0m1.701s 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:11.191 ************************************ 00:30:11.191 END TEST nvmf_target_multipath 00:30:11.191 ************************************ 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:11.191 ************************************ 00:30:11.191 START TEST nvmf_zcopy 00:30:11.191 ************************************ 00:30:11.191 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:11.451 * Looking for test storage... 
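Editor's note: nvmf_target_multipath above exits 0 almost immediately ("only one NIC for nvmf test") because no second target IP is configured, but its nvmftestinit/nvmftestfini calls still show the full phy TCP topology that nvmf_zcopy below rebuilds: the target-side E810 port is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2, the initiator side keeps cvl_0_1 with 10.0.0.1, TCP port 4420 is opened with an iptables rule tagged SPDK_NVMF so teardown can strip exactly that rule, and connectivity is verified with one ping in each direction. A condensed, hedged sketch of that setup and teardown follows; interface and namespace names are taken from the trace, and since the body of _remove_spdk_ns is redirected away in the log, the netns delete line is an assumption.

    # Setup (nvmftestinit / nvmf_tcp_init as traced above)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> root namespace

    # Teardown (nvmftestfini as traced above)
    modprobe -r nvme-tcp nvme-fabrics                  # best effort; retried in the trace
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk                    # assumption: done inside _remove_spdk_ns
    ip -4 addr flush cvl_0_1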
00:30:11.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:11.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.451 --rc genhtml_branch_coverage=1 00:30:11.451 --rc genhtml_function_coverage=1 00:30:11.451 --rc genhtml_legend=1 00:30:11.451 --rc geninfo_all_blocks=1 00:30:11.451 --rc geninfo_unexecuted_blocks=1 00:30:11.451 00:30:11.451 ' 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:11.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.451 --rc genhtml_branch_coverage=1 00:30:11.451 --rc genhtml_function_coverage=1 00:30:11.451 --rc genhtml_legend=1 00:30:11.451 --rc geninfo_all_blocks=1 00:30:11.451 --rc geninfo_unexecuted_blocks=1 00:30:11.451 00:30:11.451 ' 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:11.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.451 --rc genhtml_branch_coverage=1 00:30:11.451 --rc genhtml_function_coverage=1 00:30:11.451 --rc genhtml_legend=1 00:30:11.451 --rc geninfo_all_blocks=1 00:30:11.451 --rc geninfo_unexecuted_blocks=1 00:30:11.451 00:30:11.451 ' 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:11.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.451 --rc genhtml_branch_coverage=1 00:30:11.451 --rc genhtml_function_coverage=1 00:30:11.451 --rc genhtml_legend=1 00:30:11.451 --rc geninfo_all_blocks=1 00:30:11.451 --rc geninfo_unexecuted_blocks=1 00:30:11.451 00:30:11.451 ' 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.451 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.452 10:03:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:11.452 10:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:13.356 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:13.356 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:13.356 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:13.356 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:13.356 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:13.356 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:13.356 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:13.356 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:13.356 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:13.356 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:13.356 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:13.356 10:03:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:13.356 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:13.357 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:13.357 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:13.357 Found net devices under 0000:09:00.0: cvl_0_0 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:13.357 Found net devices under 0000:09:00.1: cvl_0_1 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:13.357 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:13.358 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:13.616 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:13.616 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:13.616 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:13.616 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:13.616 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:13.616 10:03:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:13.616 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:13.616 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:13.616 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:13.616 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:13.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:13.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:30:13.616 00:30:13.616 --- 10.0.0.2 ping statistics --- 00:30:13.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.616 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:30:13.616 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:13.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:13.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:30:13.616 00:30:13.616 --- 10.0.0.1 ping statistics --- 00:30:13.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.616 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:30:13.616 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.616 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3887084 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3887084 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3887084 ']' 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:13.617 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:13.617 [2024-11-20 10:03:50.465114] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:13.617 [2024-11-20 10:03:50.466378] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:30:13.617 [2024-11-20 10:03:50.466440] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.875 [2024-11-20 10:03:50.545520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.875 [2024-11-20 10:03:50.609146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.875 [2024-11-20 10:03:50.609200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.875 [2024-11-20 10:03:50.609229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.875 [2024-11-20 10:03:50.609240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.875 [2024-11-20 10:03:50.609257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:13.875 [2024-11-20 10:03:50.609966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.875 [2024-11-20 10:03:50.711350] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:13.875 [2024-11-20 10:03:50.711679] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
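The trace above brings up the TCP test network and then launches the target inside a network namespace in interrupt mode. A minimal sketch of the equivalent commands, assuming the interface names (cvl_0_0/cvl_0_1), the 10.0.0.0/24 addresses and the core mask seen in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port (the harness also tags the rule with a comment)
  ping -c 1 10.0.0.2                                             # reachability checks in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &

The relative ./build/bin path is an assumption; the run itself uses the absolute workspace path shown above.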
00:30:13.875 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:13.875 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:13.875 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:13.875 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:13.875 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:13.876 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.876 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:13.876 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:13.876 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.876 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:13.876 [2024-11-20 10:03:50.762565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.876 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.876 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:13.876 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.876 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:13.876 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.876 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:13.876 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.876 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:13.876 [2024-11-20 10:03:50.778751] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.876 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.876 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:13.876 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.876 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:14.134 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.134 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:14.134 10:03:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.134 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:14.134 malloc0 00:30:14.134 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.134 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:14.134 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.134 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:14.134 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.134 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:14.134 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:14.134 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:14.134 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:14.134 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:14.134 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:14.134 { 00:30:14.134 "params": { 00:30:14.134 "name": "Nvme$subsystem", 00:30:14.134 "trtype": "$TEST_TRANSPORT", 00:30:14.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:14.134 "adrfam": "ipv4", 00:30:14.134 "trsvcid": "$NVMF_PORT", 00:30:14.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:14.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:14.134 "hdgst": ${hdgst:-false}, 00:30:14.134 "ddgst": ${ddgst:-false} 00:30:14.134 }, 00:30:14.134 "method": "bdev_nvme_attach_controller" 00:30:14.134 } 00:30:14.134 EOF 00:30:14.134 )") 00:30:14.135 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:14.135 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:14.135 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:14.135 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:14.135 "params": { 00:30:14.135 "name": "Nvme1", 00:30:14.135 "trtype": "tcp", 00:30:14.135 "traddr": "10.0.0.2", 00:30:14.135 "adrfam": "ipv4", 00:30:14.135 "trsvcid": "4420", 00:30:14.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:14.135 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:14.135 "hdgst": false, 00:30:14.135 "ddgst": false 00:30:14.135 }, 00:30:14.135 "method": "bdev_nvme_attach_controller" 00:30:14.135 }' 00:30:14.135 [2024-11-20 10:03:50.862831] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
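The target configuration above is driven over the RPC socket through the rpc_cmd helper, which forwards its arguments to scripts/rpc.py. Roughly the same sequence issued directly, assuming the default /var/tmp/spdk.sock socket that waitforlisten reports:

  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero-copy enabled
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0               # 32 MiB malloc bdev, 4 KiB blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf is then pointed at that listener with a generated JSON config and a 10-second verify workload (-t 10 -q 128 -w verify -o 8192), as the trace continues below.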
00:30:14.135 [2024-11-20 10:03:50.862903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3887199 ] 00:30:14.135 [2024-11-20 10:03:50.929628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.135 [2024-11-20 10:03:50.994827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.701 Running I/O for 10 seconds... 00:30:16.569 4901.00 IOPS, 38.29 MiB/s [2024-11-20T09:03:54.416Z] 4969.00 IOPS, 38.82 MiB/s [2024-11-20T09:03:55.362Z] 4971.33 IOPS, 38.84 MiB/s [2024-11-20T09:03:56.736Z] 4997.50 IOPS, 39.04 MiB/s [2024-11-20T09:03:57.687Z] 4996.60 IOPS, 39.04 MiB/s [2024-11-20T09:03:58.621Z] 4992.00 IOPS, 39.00 MiB/s [2024-11-20T09:03:59.555Z] 4988.57 IOPS, 38.97 MiB/s [2024-11-20T09:04:00.487Z] 4997.12 IOPS, 39.04 MiB/s [2024-11-20T09:04:01.423Z] 4998.78 IOPS, 39.05 MiB/s [2024-11-20T09:04:01.423Z] 5002.10 IOPS, 39.08 MiB/s 00:30:24.509 Latency(us) 00:30:24.509 [2024-11-20T09:04:01.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.509 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:30:24.509 Verification LBA range: start 0x0 length 0x1000 00:30:24.509 Nvme1n1 : 10.02 5004.13 39.09 0.00 0.00 25510.47 2682.12 31845.64 00:30:24.509 [2024-11-20T09:04:01.423Z] =================================================================================================================== 00:30:24.509 [2024-11-20T09:04:01.423Z] Total : 5004.13 39.09 0.00 0.00 25510.47 2682.12 31845.64 00:30:24.766 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3888388 00:30:24.766 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:30:24.766 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:24.766 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:30:24.766 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:30:24.766 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:24.766 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:24.766 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:24.766 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:24.766 { 00:30:24.766 "params": { 00:30:24.766 "name": "Nvme$subsystem", 00:30:24.766 "trtype": "$TEST_TRANSPORT", 00:30:24.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.766 "adrfam": "ipv4", 00:30:24.766 "trsvcid": "$NVMF_PORT", 00:30:24.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.766 "hdgst": ${hdgst:-false}, 00:30:24.766 "ddgst": ${ddgst:-false} 00:30:24.766 }, 00:30:24.766 "method": "bdev_nvme_attach_controller" 00:30:24.766 } 00:30:24.766 EOF 00:30:24.766 )") 00:30:24.766 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:24.766 
[2024-11-20 10:04:01.602522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:24.766 [2024-11-20 10:04:01.602564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:24.766 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:24.766 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:24.766 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:24.766 "params": { 00:30:24.766 "name": "Nvme1", 00:30:24.766 "trtype": "tcp", 00:30:24.766 "traddr": "10.0.0.2", 00:30:24.766 "adrfam": "ipv4", 00:30:24.766 "trsvcid": "4420", 00:30:24.766 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:24.766 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:24.766 "hdgst": false, 00:30:24.766 "ddgst": false 00:30:24.766 }, 00:30:24.766 "method": "bdev_nvme_attach_controller" 00:30:24.766 }' 00:30:24.766 [2024-11-20 10:04:01.610457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:24.766 [2024-11-20 10:04:01.610480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:24.766 [2024-11-20 10:04:01.618452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:24.766 [2024-11-20 10:04:01.618474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:24.766 [2024-11-20 10:04:01.626449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:24.766 [2024-11-20 10:04:01.626469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:24.766 [2024-11-20 10:04:01.634449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:24.766 [2024-11-20 10:04:01.634470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:24.766 [2024-11-20 10:04:01.642452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:24.766 [2024-11-20 10:04:01.642473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:24.766 [2024-11-20 10:04:01.642932] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
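The gen_nvmf_target_json output is handed to bdevperf through --json on a process-substitution file descriptor. Only the bdev_nvme_attach_controller parameters are visible verbatim in the trace; the surrounding subsystems/bdev wrapper in the sketch below is the usual SPDK JSON-config shape and is an assumption here, as is the temporary file path:

  cat > /tmp/bdevperf.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  ./build/examples/bdevperf --json /tmp/bdevperf.json -t 5 -q 128 -w randrw -M 50 -o 8192

The workload flags match the second bdevperf invocation in this run (5-second 50/50 random read/write at queue depth 128, 8 KiB I/O).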
00:30:24.766 [2024-11-20 10:04:01.643002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3888388 ] 00:30:24.766 [2024-11-20 10:04:01.650452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:24.766 [2024-11-20 10:04:01.650473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:24.766 [2024-11-20 10:04:01.658448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:24.766 [2024-11-20 10:04:01.658481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:24.766 [2024-11-20 10:04:01.666449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:24.766 [2024-11-20 10:04:01.666470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:24.766 [2024-11-20 10:04:01.674450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:24.766 [2024-11-20 10:04:01.674471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.024 [2024-11-20 10:04:01.682452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.024 [2024-11-20 10:04:01.682472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.024 [2024-11-20 10:04:01.690454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.024 [2024-11-20 10:04:01.690475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.024 [2024-11-20 10:04:01.698456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.024 [2024-11-20 10:04:01.698477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.024 [2024-11-20 10:04:01.706450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.024 [2024-11-20 10:04:01.706470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.024 [2024-11-20 10:04:01.712494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.024 [2024-11-20 10:04:01.714450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.024 [2024-11-20 10:04:01.714470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.024 [2024-11-20 10:04:01.722486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.024 [2024-11-20 10:04:01.722521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.024 [2024-11-20 10:04:01.730485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.024 [2024-11-20 10:04:01.730521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.024 [2024-11-20 10:04:01.738450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.738471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.746453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.746473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:30:25.025 [2024-11-20 10:04:01.754449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.754469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.762448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.762468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.770449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.770469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.774801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.025 [2024-11-20 10:04:01.778451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.778471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.786452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.786472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.794484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.794516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.802490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.802533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.810490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.810525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.818485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.818521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.826484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.826517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.834487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.834521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.842480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.842510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.850456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.850477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.858484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.858517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 
10:04:01.866488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.866523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.874462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.874487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.882457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.882480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.890651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.890688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.898476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.898503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.906457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.906496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.914458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.914482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.922454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.922477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.025 [2024-11-20 10:04:01.930452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.025 [2024-11-20 10:04:01.930473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:01.938450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:01.938471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:01.946453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:01.946475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:01.954455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:01.954484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:01.962459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:01.962483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:01.970459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:01.970483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:01.978461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:01.978486] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:01.986457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:01.986482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:01.994454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:01.994479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:02.002455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:02.002479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 Running I/O for 5 seconds... 00:30:25.283 [2024-11-20 10:04:02.017155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:02.017192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:02.030439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:02.030468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:02.040767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:02.040794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:02.053000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:02.053025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:02.067091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:02.067118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:02.077187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:02.077212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:02.092617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:02.092642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:02.107939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:02.107965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:02.117443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:02.117470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:02.133599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:02.133624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:02.146845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:02.146871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:02.156297] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:02.156345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:02.168007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:02.168036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.283 [2024-11-20 10:04:02.185696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.283 [2024-11-20 10:04:02.185720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.542 [2024-11-20 10:04:02.195724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.542 [2024-11-20 10:04:02.195766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.542 [2024-11-20 10:04:02.207696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.542 [2024-11-20 10:04:02.207719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.542 [2024-11-20 10:04:02.223551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.542 [2024-11-20 10:04:02.223575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.542 [2024-11-20 10:04:02.232960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.542 [2024-11-20 10:04:02.232984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.542 [2024-11-20 10:04:02.247998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.542 [2024-11-20 10:04:02.248023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.542 [2024-11-20 10:04:02.257463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.542 [2024-11-20 10:04:02.257490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.542 [2024-11-20 10:04:02.272715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.542 [2024-11-20 10:04:02.272739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.542 [2024-11-20 10:04:02.282718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.542 [2024-11-20 10:04:02.282744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.542 [2024-11-20 10:04:02.294532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.542 [2024-11-20 10:04:02.294558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.542 [2024-11-20 10:04:02.305804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.542 [2024-11-20 10:04:02.305829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.542 [2024-11-20 10:04:02.318507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.542 [2024-11-20 10:04:02.318535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.542 [2024-11-20 10:04:02.328124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.542 [2024-11-20 10:04:02.328149] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.542 [2024-11-20 10:04:02.342988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.542 [2024-11-20 10:04:02.343013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.542 [2024-11-20 10:04:02.352573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.542 [2024-11-20 10:04:02.352614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.542 [2024-11-20 10:04:02.364819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.543 [2024-11-20 10:04:02.364843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.543 [2024-11-20 10:04:02.379803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.543 [2024-11-20 10:04:02.379828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.543 [2024-11-20 10:04:02.389470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.543 [2024-11-20 10:04:02.389496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.543 [2024-11-20 10:04:02.404547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.543 [2024-11-20 10:04:02.404589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.543 [2024-11-20 10:04:02.413722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.543 [2024-11-20 10:04:02.413747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.543 [2024-11-20 10:04:02.425675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.543 [2024-11-20 10:04:02.425700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.543 [2024-11-20 10:04:02.441346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.543 [2024-11-20 10:04:02.441371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.456984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.457009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.472097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.472122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.482078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.482102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.493973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.493997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.504941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.504964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.517797] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.517821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.527532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.527557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.539485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.539511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.550709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.550733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.561927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.561951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.574690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.574716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.584117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.584142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.595959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.595984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.610398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.610426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.620287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.620326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.632375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.632401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.648766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.648792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.658400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.658427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.670431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.670458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.681665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.681689] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.694635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.694661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.801 [2024-11-20 10:04:02.704018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.801 [2024-11-20 10:04:02.704042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.716129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.716156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.732412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.732448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.742002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.742040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.753982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.754006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.767851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.767878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.777454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.777480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.791495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.791520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.801468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.801493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.816551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.816591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.826450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.826476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.838681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.838705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.849416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.849442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.863475] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.863502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.873214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.873239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.887952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.887976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.897746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.897770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.909733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.909757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.921237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.921261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.935795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.935820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.945674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.945698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.059 [2024-11-20 10:04:02.957894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.059 [2024-11-20 10:04:02.957920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.317 [2024-11-20 10:04:02.972530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.317 [2024-11-20 10:04:02.972556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.317 [2024-11-20 10:04:02.981906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.317 [2024-11-20 10:04:02.981932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.317 [2024-11-20 10:04:02.993414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.318 [2024-11-20 10:04:02.993440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.318 [2024-11-20 10:04:03.006504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.318 [2024-11-20 10:04:03.006531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.318 11483.00 IOPS, 89.71 MiB/s [2024-11-20T09:04:03.232Z] [2024-11-20 10:04:03.017203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:26.318 [2024-11-20 10:04:03.017227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.318 [2024-11-20 10:04:03.030058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
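The repeated subsystem.c/nvmf_rpc.c pairs above appear to come from nvmf_subsystem_add_ns being reissued for NSID 1 while that namespace is still attached, so each attempt is rejected while the random read/write job keeps running. A quick way to inspect or clear that state, sketched with the standard RPCs and the cnode1/malloc0/NSID 1 names from this run:

  scripts/rpc.py nvmf_get_subsystems                                    # lists cnode1 and shows NSID 1 occupied by malloc0
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1  # detach NSID 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # re-attach succeeds once the NSID is free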
00:30:26.318 [2024-11-20 10:04:03.030085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:26.318 [2024-11-20 10:04:03.039610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:26.318 [2024-11-20 10:04:03.039649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of errors ("Requested NSID 1 already in use" from subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext, followed by "Unable to add namespace" from nvmf_rpc.c:1517:nvmf_rpc_ns_paused) repeats for every add-namespace attempt logged between 10:04:03.051 and 10:04:04.005, console timestamps 00:30:26.318 through 00:30:27.352 ...]
00:30:27.352 11505.50 IOPS, 89.89 MiB/s [2024-11-20T09:04:04.266Z]
[... the same error pair continues for the attempts logged between 10:04:04.020 and 10:04:04.102 ...]
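[Note: the repeated "Requested NSID 1 already in use" / "Unable to add namespace" pair indicates that each nvmf_subsystem_add_ns RPC in this loop requested a namespace ID the subsystem had already allocated, so spdk_nvmf_subsystem_add_ns_ext() rejected it and the RPC handler reported the failure. A minimal reproduction sketch against an already-running SPDK target follows; the NQN, bdev names and sizes are hypothetical and not taken from this run, and it assumes ./scripts/rpc.py can reach the target's default RPC socket:]
  # sketch only - hypothetical names, not part of this build's test scripts
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # NSID 1 is now taken
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1   # fails; target logs "Requested NSID 1 already in use" then "Unable to add namespace"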
00:30:27.352 [2024-11-20 10:04:04.102872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats for the attempts logged between 10:04:04.112 and 10:04:05.009, console timestamps 00:30:27.352 through 00:30:28.129 ...]
00:30:28.129 11552.33 IOPS, 90.25 MiB/s [2024-11-20T09:04:05.043Z]
[... the same error pair continues for the attempts logged between 10:04:05.025 and 10:04:05.191 ...]
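[Note: the interleaved "... IOPS, ... MiB/s" samples appear to be periodic throughput readings from an I/O workload running concurrently with the add-namespace RPC loop. The two figures in each sample are consistent with bandwidth = IOPS x I/O size for an ~8 KiB I/O size; that size is an inference from the numbers, not something stated in the log, and can be checked with:]
  awk 'BEGIN { printf "%.2f MiB/s\n", 11505.50 * 8192 / 1048576 }'   # prints 89.89 MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 11552.33 * 8192 / 1048576 }'   # prints 90.25 MiB/s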
00:30:28.388 [2024-11-20 10:04:05.191178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats for the attempts logged between 10:04:05.200 and 10:04:06.007, console timestamps 00:30:28.388 through 00:30:29.163 ...]
00:30:29.163 11555.50 IOPS, 90.28 MiB/s [2024-11-20T09:04:06.077Z]
[... the same error pair continues for the attempts logged between 10:04:06.016 and 10:04:06.626, console timestamps 00:30:29.163 through 00:30:29.939 ...]
00:30:29.939 [2024-11-20 10:04:06.638248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:29.939 [2024-11-20 10:04:06.638274]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.939 [2024-11-20 10:04:06.650920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.939 [2024-11-20 10:04:06.650946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.939 [2024-11-20 10:04:06.660163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.939 [2024-11-20 10:04:06.660188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.939 [2024-11-20 10:04:06.671914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.939 [2024-11-20 10:04:06.671939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.939 [2024-11-20 10:04:06.682734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.939 [2024-11-20 10:04:06.682759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.939 [2024-11-20 10:04:06.693532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.939 [2024-11-20 10:04:06.693559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.939 [2024-11-20 10:04:06.707192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.939 [2024-11-20 10:04:06.707216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.939 [2024-11-20 10:04:06.716974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.939 [2024-11-20 10:04:06.716999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.939 [2024-11-20 10:04:06.728668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.939 [2024-11-20 10:04:06.728692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.939 [2024-11-20 10:04:06.743527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.939 [2024-11-20 10:04:06.743554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.939 [2024-11-20 10:04:06.753437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.939 [2024-11-20 10:04:06.753464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.939 [2024-11-20 10:04:06.766852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.939 [2024-11-20 10:04:06.766894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.939 [2024-11-20 10:04:06.776361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.939 [2024-11-20 10:04:06.776389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.939 [2024-11-20 10:04:06.788027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.939 [2024-11-20 10:04:06.788053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.939 [2024-11-20 10:04:06.804789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.939 [2024-11-20 10:04:06.804813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.939 [2024-11-20 10:04:06.822268] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.939 [2024-11-20 10:04:06.822315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.939 [2024-11-20 10:04:06.832924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.940 [2024-11-20 10:04:06.832949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.940 [2024-11-20 10:04:06.847736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.940 [2024-11-20 10:04:06.847763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:06.857183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:06.857209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:06.871147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:06.871171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:06.880501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:06.880527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:06.892680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:06.892721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:06.907597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:06.907623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:06.917340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:06.917366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:06.932258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:06.932285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:06.941808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:06.941832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:06.953335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:06.953361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:06.965983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:06.966009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:06.975759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:06.975784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:06.987913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:06.987937] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:07.003339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:07.003366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:07.012751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:07.012775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 11563.40 IOPS, 90.34 MiB/s [2024-11-20T09:04:07.112Z] [2024-11-20 10:04:07.022962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:07.022990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 00:30:30.198 Latency(us) 00:30:30.198 [2024-11-20T09:04:07.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.198 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:30:30.198 Nvme1n1 : 5.01 11569.81 90.39 0.00 0.00 11048.79 3058.35 18155.90 00:30:30.198 [2024-11-20T09:04:07.112Z] =================================================================================================================== 00:30:30.198 [2024-11-20T09:04:07.112Z] Total : 11569.81 90.39 0.00 0.00 11048.79 3058.35 18155.90 00:30:30.198 [2024-11-20 10:04:07.030460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:07.030487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:07.038464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:07.038493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:07.046471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:07.046495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:07.054513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:07.054562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:07.062518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:07.062566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:07.070513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:07.070559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:07.078512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:07.078560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:07.086509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:07.086553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:07.094514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 
10:04:07.094564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.198 [2024-11-20 10:04:07.102509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.198 [2024-11-20 10:04:07.102557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 [2024-11-20 10:04:07.110512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.456 [2024-11-20 10:04:07.110554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 [2024-11-20 10:04:07.118513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.456 [2024-11-20 10:04:07.118562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 [2024-11-20 10:04:07.126516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.456 [2024-11-20 10:04:07.126564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 [2024-11-20 10:04:07.134528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.456 [2024-11-20 10:04:07.134577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 [2024-11-20 10:04:07.142512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.456 [2024-11-20 10:04:07.142558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 [2024-11-20 10:04:07.150520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.456 [2024-11-20 10:04:07.150567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 [2024-11-20 10:04:07.158504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.456 [2024-11-20 10:04:07.158546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 [2024-11-20 10:04:07.166507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.456 [2024-11-20 10:04:07.166564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 [2024-11-20 10:04:07.174483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.456 [2024-11-20 10:04:07.174517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 [2024-11-20 10:04:07.182460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.456 [2024-11-20 10:04:07.182482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 [2024-11-20 10:04:07.190454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.456 [2024-11-20 10:04:07.190476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 [2024-11-20 10:04:07.198456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.456 [2024-11-20 10:04:07.198477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 [2024-11-20 10:04:07.206456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.456 [2024-11-20 10:04:07.206479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 [2024-11-20 10:04:07.214517] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.456 [2024-11-20 10:04:07.214561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 [2024-11-20 10:04:07.222526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.456 [2024-11-20 10:04:07.222567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 [2024-11-20 10:04:07.230480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.456 [2024-11-20 10:04:07.230511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 [2024-11-20 10:04:07.238457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.456 [2024-11-20 10:04:07.238480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 [2024-11-20 10:04:07.246457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.456 [2024-11-20 10:04:07.246479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 [2024-11-20 10:04:07.254454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.456 [2024-11-20 10:04:07.254475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3888388) - No such process 00:30:30.456 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3888388 00:30:30.456 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.456 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.456 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:30.456 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.456 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:30.456 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.456 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:30.456 delay0 00:30:30.456 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.456 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:30:30.456 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.456 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:30.456 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.456 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:30:30.456 [2024-11-20 10:04:07.338982] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:37.009 Initializing NVMe Controllers 00:30:37.009 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:37.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:37.009 Initialization complete. Launching workers. 00:30:37.009 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 155 00:30:37.009 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 442, failed to submit 33 00:30:37.009 success 358, unsuccessful 84, failed 0 00:30:37.009 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.010 rmmod nvme_tcp 00:30:37.010 rmmod nvme_fabrics 00:30:37.010 rmmod nvme_keyring 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3887084 ']' 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3887084 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3887084 ']' 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3887084 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3887084 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3887084' 00:30:37.010 killing process with pid 3887084 00:30:37.010 
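After the duplicate-NSID loop, zcopy.sh (lines @52-@56 in the trace above) removes namespace 1 from nqn.2016-06.io.spdk:cnode1, recreates it on top of a delay bdev (presumably so outstanding I/O stays in flight long enough to be aborted), and drives it with the abort example over TCP before nvmftestfini tears the target down. A minimal standalone sketch of the same sequence, assuming scripts/rpc.py as the out-of-harness equivalent of the rpc_cmd wrapper, a shell started in the SPDK checkout, and a target already listening on 10.0.0.2:4420:

  # sketch only: mirrors target/zcopy.sh@52-@56 from the trace above
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000       # large artificial latencies keep I/O outstanding
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'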
10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3887084 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3887084 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.010 10:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.546 10:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:39.546 00:30:39.546 real 0m27.821s 00:30:39.546 user 0m38.067s 00:30:39.546 sys 0m10.038s 00:30:39.546 10:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:39.546 10:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:39.546 ************************************ 00:30:39.546 END TEST nvmf_zcopy 00:30:39.546 ************************************ 00:30:39.546 10:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:39.546 10:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:39.546 10:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:39.546 10:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:39.546 ************************************ 00:30:39.546 START TEST nvmf_nmic 00:30:39.546 ************************************ 00:30:39.546 10:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:39.546 * Looking for test storage... 
00:30:39.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:39.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.546 --rc genhtml_branch_coverage=1 00:30:39.546 --rc genhtml_function_coverage=1 00:30:39.546 --rc genhtml_legend=1 00:30:39.546 --rc geninfo_all_blocks=1 00:30:39.546 --rc geninfo_unexecuted_blocks=1 00:30:39.546 00:30:39.546 ' 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:39.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.546 --rc genhtml_branch_coverage=1 00:30:39.546 --rc genhtml_function_coverage=1 00:30:39.546 --rc genhtml_legend=1 00:30:39.546 --rc geninfo_all_blocks=1 00:30:39.546 --rc geninfo_unexecuted_blocks=1 00:30:39.546 00:30:39.546 ' 00:30:39.546 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:39.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.546 --rc genhtml_branch_coverage=1 00:30:39.546 --rc genhtml_function_coverage=1 00:30:39.546 --rc genhtml_legend=1 00:30:39.546 --rc geninfo_all_blocks=1 00:30:39.547 --rc geninfo_unexecuted_blocks=1 00:30:39.547 00:30:39.547 ' 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:39.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.547 --rc genhtml_branch_coverage=1 00:30:39.547 --rc genhtml_function_coverage=1 00:30:39.547 --rc genhtml_legend=1 00:30:39.547 --rc geninfo_all_blocks=1 00:30:39.547 --rc geninfo_unexecuted_blocks=1 00:30:39.547 00:30:39.547 ' 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.547 10:04:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:30:39.547 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:41.452 10:04:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:41.452 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.452 10:04:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:41.452 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:41.452 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:41.453 Found net devices under 0000:09:00.0: cvl_0_0 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.453 
10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:41.453 Found net devices under 0000:09:00.1: cvl_0_1 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:41.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:41.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:30:41.453 00:30:41.453 --- 10.0.0.2 ping statistics --- 00:30:41.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.453 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:41.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:30:41.453 00:30:41.453 --- 10.0.0.1 ping statistics --- 00:30:41.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.453 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:41.453 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:41.711 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:30:41.711 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:41.711 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:41.711 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:41.711 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3891759 00:30:41.711 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:41.711 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3891759 00:30:41.711 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3891759 ']' 00:30:41.711 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.711 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:41.711 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:41.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.711 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:41.711 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:41.711 [2024-11-20 10:04:18.431264] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:41.711 [2024-11-20 10:04:18.432343] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:30:41.711 [2024-11-20 10:04:18.432411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:41.711 [2024-11-20 10:04:18.502264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:41.711 [2024-11-20 10:04:18.558929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:41.711 [2024-11-20 10:04:18.558982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:41.712 [2024-11-20 10:04:18.559009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:41.712 [2024-11-20 10:04:18.559020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:41.712 [2024-11-20 10:04:18.559029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:41.712 [2024-11-20 10:04:18.560660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.712 [2024-11-20 10:04:18.560721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:41.712 [2024-11-20 10:04:18.560833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:41.712 [2024-11-20 10:04:18.560836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.970 [2024-11-20 10:04:18.645967] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:41.970 [2024-11-20 10:04:18.646161] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:41.970 [2024-11-20 10:04:18.646489] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
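The nvmf_tcp_init trace above reduces to a small namespace split: one E810 port (cvl_0_0) is moved into a private namespace to host the target, the other (cvl_0_1) stays in the default namespace as the initiator, and a single iptables rule opens the NVMe/TCP port. The sketch below replays those commands with the names and addresses from this run; it is an illustration of the pattern, not the common.sh code itself, and assumes both ports are already bound to the ice driver.

#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init pattern traced above, using this run's names.
set -euo pipefail

TARGET_IF=cvl_0_0        # hosts the NVMe-oF target inside the namespace
INITIATOR_IF=cvl_0_1     # stays in the default namespace for the initiator
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open port 4420 on the initiator side; the comment tag mirrors the ipts()
# helper so the rule can be filtered out again at teardown.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Same sanity checks as the trace: one ping in each direction.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

With that in place, the target is started inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF, as in the trace) and everything else reaches it at 10.0.0.2.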
00:30:41.970 [2024-11-20 10:04:18.647144] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:41.970 [2024-11-20 10:04:18.647399] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:41.970 [2024-11-20 10:04:18.701507] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:41.970 Malloc0 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:41.970 
10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:41.970 [2024-11-20 10:04:18.769691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.970 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:30:41.971 test case1: single bdev can't be used in multiple subsystems 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:41.971 [2024-11-20 10:04:18.793436] bdev.c:8203:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:30:41.971 [2024-11-20 10:04:18.793467] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:30:41.971 [2024-11-20 10:04:18.793482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.971 request: 00:30:41.971 { 00:30:41.971 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:30:41.971 "namespace": { 00:30:41.971 "bdev_name": "Malloc0", 00:30:41.971 "no_auto_visible": false 00:30:41.971 }, 00:30:41.971 "method": "nvmf_subsystem_add_ns", 00:30:41.971 "req_id": 1 00:30:41.971 } 00:30:41.971 Got JSON-RPC error response 00:30:41.971 response: 00:30:41.971 { 00:30:41.971 "code": -32602, 00:30:41.971 "message": "Invalid parameters" 00:30:41.971 } 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:30:41.971 10:04:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:30:41.971 Adding namespace failed - expected result. 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:30:41.971 test case2: host connect to nvmf target in multiple paths 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:41.971 [2024-11-20 10:04:18.801560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.971 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:42.229 10:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:30:42.499 10:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:30:42.499 10:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:30:42.499 10:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:42.499 10:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:30:42.499 10:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:30:44.398 10:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:44.398 10:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:44.398 10:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:44.398 10:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:30:44.398 10:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:44.398 10:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:30:44.398 10:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:44.398 [global] 00:30:44.398 thread=1 00:30:44.398 invalidate=1 
00:30:44.398 rw=write 00:30:44.398 time_based=1 00:30:44.398 runtime=1 00:30:44.398 ioengine=libaio 00:30:44.398 direct=1 00:30:44.398 bs=4096 00:30:44.398 iodepth=1 00:30:44.398 norandommap=0 00:30:44.398 numjobs=1 00:30:44.398 00:30:44.398 verify_dump=1 00:30:44.398 verify_backlog=512 00:30:44.398 verify_state_save=0 00:30:44.398 do_verify=1 00:30:44.398 verify=crc32c-intel 00:30:44.656 [job0] 00:30:44.656 filename=/dev/nvme0n1 00:30:44.656 Could not set queue depth (nvme0n1) 00:30:44.656 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:44.656 fio-3.35 00:30:44.656 Starting 1 thread 00:30:46.029 00:30:46.029 job0: (groupid=0, jobs=1): err= 0: pid=3892220: Wed Nov 20 10:04:22 2024 00:30:46.029 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:30:46.029 slat (nsec): min=4428, max=36516, avg=11654.34, stdev=5704.12 00:30:46.029 clat (usec): min=199, max=618, avg=240.62, stdev=52.80 00:30:46.029 lat (usec): min=205, max=639, avg=252.27, stdev=55.58 00:30:46.029 clat percentiles (usec): 00:30:46.029 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 217], 00:30:46.029 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 235], 00:30:46.029 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 258], 95.00th=[ 285], 00:30:46.029 | 99.00th=[ 515], 99.50th=[ 537], 99.90th=[ 611], 99.95th=[ 611], 00:30:46.029 | 99.99th=[ 619] 00:30:46.029 write: IOPS=2216, BW=8867KiB/s (9080kB/s)(8876KiB/1001msec); 0 zone resets 00:30:46.029 slat (usec): min=6, max=28597, avg=26.57, stdev=606.84 00:30:46.029 clat (usec): min=139, max=389, avg=184.30, stdev=36.60 00:30:46.029 lat (usec): min=146, max=28773, avg=210.87, stdev=607.70 00:30:46.029 clat percentiles (usec): 00:30:46.029 | 1.00th=[ 143], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 153], 00:30:46.029 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 169], 60.00th=[ 190], 00:30:46.029 | 70.00th=[ 194], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 247], 00:30:46.029 | 99.00th=[ 251], 99.50th=[ 255], 99.90th=[ 262], 99.95th=[ 265], 00:30:46.029 | 99.99th=[ 388] 00:30:46.029 bw ( KiB/s): min= 8928, max= 8928, per=100.00%, avg=8928.00, stdev= 0.00, samples=1 00:30:46.029 iops : min= 2232, max= 2232, avg=2232.00, stdev= 0.00, samples=1 00:30:46.029 lat (usec) : 250=91.94%, 500=7.41%, 750=0.66% 00:30:46.029 cpu : usr=3.60%, sys=6.20%, ctx=4270, majf=0, minf=1 00:30:46.030 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:46.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.030 issued rwts: total=2048,2219,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.030 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:46.030 00:30:46.030 Run status group 0 (all jobs): 00:30:46.030 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:30:46.030 WRITE: bw=8867KiB/s (9080kB/s), 8867KiB/s-8867KiB/s (9080kB/s-9080kB/s), io=8876KiB (9089kB), run=1001-1001msec 00:30:46.030 00:30:46.030 Disk stats (read/write): 00:30:46.030 nvme0n1: ios=1800/2048, merge=0/0, ticks=1392/356, in_queue=1748, util=98.60% 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:46.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:46.030 rmmod nvme_tcp 00:30:46.030 rmmod nvme_fabrics 00:30:46.030 rmmod nvme_keyring 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3891759 ']' 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3891759 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3891759 ']' 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3891759 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:46.030 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3891759 00:30:46.288 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:46.288 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:46.288 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3891759' 00:30:46.288 
killing process with pid 3891759 00:30:46.288 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3891759 00:30:46.288 10:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3891759 00:30:46.288 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:46.288 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:46.288 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:46.288 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:30:46.288 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:30:46.288 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:46.288 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:30:46.288 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:46.288 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:46.288 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.288 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:46.288 10:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:48.823 00:30:48.823 real 0m9.269s 00:30:48.823 user 0m17.277s 00:30:48.823 sys 0m3.492s 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:48.823 ************************************ 00:30:48.823 END TEST nvmf_nmic 00:30:48.823 ************************************ 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:48.823 ************************************ 00:30:48.823 START TEST nvmf_fio_target 00:30:48.823 ************************************ 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:48.823 * Looking for test storage... 
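For reference, the fio run in the nmic test above was produced by scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v, which expanded to the [global]/[job0] job shown in the log. A hand-written equivalent is below; the job options are copied from that listing, the job file name is arbitrary, and /dev/nvme0n1 assumes the connected namespace keeps the name it had in this run.

#!/usr/bin/env bash
# Equivalent of the fio-wrapper invocation above, with the expanded job
# options copied from the log. Requires fio and a connected /dev/nvme0n1.
cat > nmic-write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF

fio nmic-write.fio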
00:30:48.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:48.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.823 --rc genhtml_branch_coverage=1 00:30:48.823 --rc genhtml_function_coverage=1 00:30:48.823 --rc genhtml_legend=1 00:30:48.823 --rc geninfo_all_blocks=1 00:30:48.823 --rc geninfo_unexecuted_blocks=1 00:30:48.823 00:30:48.823 ' 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:48.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.823 --rc genhtml_branch_coverage=1 00:30:48.823 --rc genhtml_function_coverage=1 00:30:48.823 --rc genhtml_legend=1 00:30:48.823 --rc geninfo_all_blocks=1 00:30:48.823 --rc geninfo_unexecuted_blocks=1 00:30:48.823 00:30:48.823 ' 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:48.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.823 --rc genhtml_branch_coverage=1 00:30:48.823 --rc genhtml_function_coverage=1 00:30:48.823 --rc genhtml_legend=1 00:30:48.823 --rc geninfo_all_blocks=1 00:30:48.823 --rc geninfo_unexecuted_blocks=1 00:30:48.823 00:30:48.823 ' 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:48.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.823 --rc genhtml_branch_coverage=1 00:30:48.823 --rc genhtml_function_coverage=1 00:30:48.823 --rc genhtml_legend=1 00:30:48.823 --rc geninfo_all_blocks=1 00:30:48.823 --rc geninfo_unexecuted_blocks=1 00:30:48.823 
00:30:48.823 ' 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:48.823 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:48.824 10:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:50.728 10:04:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:50.728 10:04:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:50.728 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:50.728 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:50.728 Found net 
devices under 0000:09:00.0: cvl_0_0 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:50.728 Found net devices under 0000:09:00.1: cvl_0_1 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.728 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:50.729 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.729 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.729 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:50.729 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:50.729 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.729 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.729 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:50.729 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:50.729 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.729 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.986 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.986 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.986 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:50.986 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.986 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.986 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:50.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:50.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:30:50.987 00:30:50.987 --- 10.0.0.2 ping statistics --- 00:30:50.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.987 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:50.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:30:50.987 00:30:50.987 --- 10.0.0.1 ping statistics --- 00:30:50.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.987 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3894341 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3894341 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3894341 ']' 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
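The nvmf_tcp_init sequence traced above builds a two-port back-to-back topology before any NVMe/TCP traffic flows. As a minimal sketch for reproducing it by hand (run as root; the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addresses are the ones from this run and will differ on other hardware):

    # Move the target-side port into its own namespace and address both ends.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Open the default NVMe/TCP port before connecting.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Verify reachability in both directions, as the harness does.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

Keeping the two ports in separate network stacks is what lets a single machine act as both target and initiator over real NICs.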
00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:50.987 10:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:50.987 [2024-11-20 10:04:27.807776] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:50.987 [2024-11-20 10:04:27.808850] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:30:50.987 [2024-11-20 10:04:27.808914] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.987 [2024-11-20 10:04:27.881057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:51.246 [2024-11-20 10:04:27.942119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.246 [2024-11-20 10:04:27.942178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:51.246 [2024-11-20 10:04:27.942213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.246 [2024-11-20 10:04:27.942225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.246 [2024-11-20 10:04:27.942234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:51.246 [2024-11-20 10:04:27.943818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.246 [2024-11-20 10:04:27.943865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:51.246 [2024-11-20 10:04:27.943914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:51.246 [2024-11-20 10:04:27.943917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.246 [2024-11-20 10:04:28.032553] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:51.246 [2024-11-20 10:04:28.032769] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:51.246 [2024-11-20 10:04:28.033092] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:51.246 [2024-11-20 10:04:28.033771] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:51.246 [2024-11-20 10:04:28.033997] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:30:51.246 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:51.246 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:30:51.246 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:51.246 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:51.246 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.246 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.246 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:51.504 [2024-11-20 10:04:28.388630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.762 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:52.020 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:30:52.020 10:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:52.278 10:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:30:52.278 10:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:52.538 10:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:30:52.538 10:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:52.796 10:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:30:52.796 10:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:30:53.362 10:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:53.620 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:30:53.620 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:53.878 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:30:53.878 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:54.136 10:04:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:30:54.136 10:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:30:54.394 10:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:54.651 10:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:54.651 10:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:54.909 10:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:54.910 10:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:55.168 10:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:55.425 [2024-11-20 10:04:32.328748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.683 10:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:30:55.941 10:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:30:56.199 10:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:56.199 10:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:30:56.199 10:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:30:56.199 10:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:56.199 10:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:30:56.199 10:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:30:56.199 10:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:30:58.784 10:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:58.785 10:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:30:58.785 10:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:58.785 10:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:30:58.785 10:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:58.785 10:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:30:58.785 10:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:58.785 [global] 00:30:58.785 thread=1 00:30:58.785 invalidate=1 00:30:58.785 rw=write 00:30:58.785 time_based=1 00:30:58.785 runtime=1 00:30:58.785 ioengine=libaio 00:30:58.785 direct=1 00:30:58.785 bs=4096 00:30:58.785 iodepth=1 00:30:58.785 norandommap=0 00:30:58.785 numjobs=1 00:30:58.785 00:30:58.785 verify_dump=1 00:30:58.785 verify_backlog=512 00:30:58.785 verify_state_save=0 00:30:58.785 do_verify=1 00:30:58.785 verify=crc32c-intel 00:30:58.785 [job0] 00:30:58.785 filename=/dev/nvme0n1 00:30:58.785 [job1] 00:30:58.785 filename=/dev/nvme0n2 00:30:58.785 [job2] 00:30:58.785 filename=/dev/nvme0n3 00:30:58.785 [job3] 00:30:58.785 filename=/dev/nvme0n4 00:30:58.785 Could not set queue depth (nvme0n1) 00:30:58.785 Could not set queue depth (nvme0n2) 00:30:58.785 Could not set queue depth (nvme0n3) 00:30:58.785 Could not set queue depth (nvme0n4) 00:30:58.785 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:58.785 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:58.785 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:58.785 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:58.785 fio-3.35 00:30:58.785 Starting 4 threads 00:30:59.718 00:30:59.718 job0: (groupid=0, jobs=1): err= 0: pid=3895412: Wed Nov 20 10:04:36 2024 00:30:59.718 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:30:59.718 slat (nsec): min=4550, max=54056, avg=11193.78, stdev=5868.63 00:30:59.718 clat (usec): min=230, max=41249, avg=715.74, stdev=4210.54 00:30:59.718 lat (usec): min=236, max=41265, avg=726.94, stdev=4210.89 00:30:59.718 clat percentiles (usec): 00:30:59.718 | 1.00th=[ 239], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 251], 00:30:59.718 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 262], 60.00th=[ 265], 00:30:59.718 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 302], 95.00th=[ 334], 00:30:59.718 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:59.718 | 99.99th=[41157] 00:30:59.718 write: IOPS=1068, BW=4276KiB/s (4378kB/s)(4280KiB/1001msec); 0 zone resets 00:30:59.718 slat (nsec): min=5850, max=59579, avg=15016.09, stdev=9331.08 00:30:59.718 clat (usec): min=157, max=2755, avg=217.03, stdev=100.12 00:30:59.718 lat (usec): min=164, max=2777, avg=232.04, stdev=103.10 00:30:59.718 clat percentiles (usec): 00:30:59.718 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:30:59.718 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 188], 60.00th=[ 200], 00:30:59.718 | 70.00th=[ 219], 80.00th=[ 245], 90.00th=[ 330], 95.00th=[ 363], 00:30:59.718 | 99.00th=[ 
400], 99.50th=[ 412], 99.90th=[ 510], 99.95th=[ 2769], 00:30:59.718 | 99.99th=[ 2769] 00:30:59.718 bw ( KiB/s): min= 4096, max= 4096, per=28.91%, avg=4096.00, stdev= 0.00, samples=1 00:30:59.718 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:59.718 lat (usec) : 250=49.00%, 500=50.24%, 750=0.10%, 1000=0.05% 00:30:59.718 lat (msec) : 4=0.05%, 10=0.05%, 50=0.53% 00:30:59.718 cpu : usr=1.70%, sys=2.50%, ctx=2097, majf=0, minf=1 00:30:59.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:59.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.718 issued rwts: total=1024,1070,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:59.718 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:59.718 job1: (groupid=0, jobs=1): err= 0: pid=3895413: Wed Nov 20 10:04:36 2024 00:30:59.718 read: IOPS=20, BW=83.8KiB/s (85.8kB/s)(84.0KiB/1002msec) 00:30:59.718 slat (nsec): min=7506, max=33596, avg=14741.67, stdev=5542.14 00:30:59.718 clat (usec): min=22702, max=42052, avg=41008.90, stdev=4201.15 00:30:59.718 lat (usec): min=22716, max=42065, avg=41023.65, stdev=4201.17 00:30:59.718 clat percentiles (usec): 00:30:59.718 | 1.00th=[22676], 5.00th=[41157], 10.00th=[41681], 20.00th=[42206], 00:30:59.718 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:59.718 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:59.718 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:59.718 | 99.99th=[42206] 00:30:59.718 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:30:59.718 slat (nsec): min=6101, max=57261, avg=17655.06, stdev=8182.51 00:30:59.718 clat (usec): min=169, max=4002, avg=250.86, stdev=179.67 00:30:59.718 lat (usec): min=183, max=4038, avg=268.51, stdev=181.21 00:30:59.719 clat percentiles (usec): 00:30:59.719 | 1.00th=[ 184], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 212], 00:30:59.719 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 249], 00:30:59.719 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:30:59.719 | 99.00th=[ 363], 99.50th=[ 807], 99.90th=[ 4015], 99.95th=[ 4015], 00:30:59.719 | 99.99th=[ 4015] 00:30:59.719 bw ( KiB/s): min= 4096, max= 4096, per=28.91%, avg=4096.00, stdev= 0.00, samples=1 00:30:59.719 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:59.719 lat (usec) : 250=58.72%, 500=36.77%, 1000=0.19% 00:30:59.719 lat (msec) : 2=0.19%, 10=0.19%, 50=3.94% 00:30:59.719 cpu : usr=0.40%, sys=1.20%, ctx=535, majf=0, minf=1 00:30:59.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:59.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.719 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:59.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:59.719 job2: (groupid=0, jobs=1): err= 0: pid=3895414: Wed Nov 20 10:04:36 2024 00:30:59.719 read: IOPS=23, BW=93.9KiB/s (96.2kB/s)(96.0KiB/1022msec) 00:30:59.719 slat (nsec): min=13582, max=27038, avg=14951.83, stdev=2819.54 00:30:59.719 clat (usec): min=279, max=42270, avg=37781.82, stdev=11543.01 00:30:59.719 lat (usec): min=293, max=42297, avg=37796.77, stdev=11543.09 00:30:59.719 clat percentiles (usec): 00:30:59.719 | 1.00th=[ 281], 5.00th=[ 375], 
10.00th=[41157], 20.00th=[41157], 00:30:59.719 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:59.719 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:30:59.719 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:59.719 | 99.99th=[42206] 00:30:59.719 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:30:59.719 slat (nsec): min=6783, max=37993, avg=14654.46, stdev=5787.95 00:30:59.719 clat (usec): min=168, max=383, avg=205.55, stdev=22.64 00:30:59.719 lat (usec): min=177, max=408, avg=220.20, stdev=22.27 00:30:59.719 clat percentiles (usec): 00:30:59.719 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:30:59.719 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:30:59.719 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 245], 00:30:59.719 | 99.00th=[ 269], 99.50th=[ 302], 99.90th=[ 383], 99.95th=[ 383], 00:30:59.719 | 99.99th=[ 383] 00:30:59.719 bw ( KiB/s): min= 4096, max= 4096, per=28.91%, avg=4096.00, stdev= 0.00, samples=1 00:30:59.719 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:59.719 lat (usec) : 250=92.72%, 500=3.17% 00:30:59.719 lat (msec) : 50=4.10% 00:30:59.719 cpu : usr=0.49%, sys=0.59%, ctx=537, majf=0, minf=1 00:30:59.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:59.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.719 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:59.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:59.719 job3: (groupid=0, jobs=1): err= 0: pid=3895415: Wed Nov 20 10:04:36 2024 00:30:59.719 read: IOPS=1259, BW=5038KiB/s (5159kB/s)(5164KiB/1025msec) 00:30:59.719 slat (nsec): min=4277, max=70402, avg=10062.71, stdev=4996.48 00:30:59.719 clat (usec): min=200, max=41184, avg=549.41, stdev=3576.87 00:30:59.719 lat (usec): min=205, max=41254, avg=559.47, stdev=3577.86 00:30:59.719 clat percentiles (usec): 00:30:59.719 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:30:59.719 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 231], 00:30:59.719 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 273], 00:30:59.719 | 99.00th=[ 848], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:59.719 | 99.99th=[41157] 00:30:59.719 write: IOPS=1498, BW=5994KiB/s (6138kB/s)(6144KiB/1025msec); 0 zone resets 00:30:59.719 slat (nsec): min=5794, max=56536, avg=12545.29, stdev=5388.12 00:30:59.719 clat (usec): min=148, max=3490, avg=178.42, stdev=90.46 00:30:59.719 lat (usec): min=155, max=3524, avg=190.96, stdev=91.24 00:30:59.719 clat percentiles (usec): 00:30:59.719 | 1.00th=[ 151], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 161], 00:30:59.719 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 174], 00:30:59.719 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 215], 00:30:59.719 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 1090], 99.95th=[ 3490], 00:30:59.719 | 99.99th=[ 3490] 00:30:59.719 bw ( KiB/s): min= 2192, max=10096, per=43.37%, avg=6144.00, stdev=5588.97, samples=2 00:30:59.719 iops : min= 548, max= 2524, avg=1536.00, stdev=1397.24, samples=2 00:30:59.719 lat (usec) : 250=94.20%, 500=5.27%, 1000=0.04% 00:30:59.719 lat (msec) : 2=0.11%, 4=0.04%, 50=0.35% 00:30:59.719 cpu : usr=2.25%, sys=2.64%, ctx=2827, majf=0, minf=2 00:30:59.719 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:59.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.719 issued rwts: total=1291,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:59.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:59.719 00:30:59.719 Run status group 0 (all jobs): 00:30:59.719 READ: bw=9210KiB/s (9431kB/s), 83.8KiB/s-5038KiB/s (85.8kB/s-5159kB/s), io=9440KiB (9667kB), run=1001-1025msec 00:30:59.719 WRITE: bw=13.8MiB/s (14.5MB/s), 2004KiB/s-5994KiB/s (2052kB/s-6138kB/s), io=14.2MiB (14.9MB), run=1001-1025msec 00:30:59.719 00:30:59.719 Disk stats (read/write): 00:30:59.719 nvme0n1: ios=594/1024, merge=0/0, ticks=683/219, in_queue=902, util=87.37% 00:30:59.719 nvme0n2: ios=65/512, merge=0/0, ticks=787/127, in_queue=914, util=91.36% 00:30:59.719 nvme0n3: ios=82/512, merge=0/0, ticks=792/103, in_queue=895, util=95.20% 00:30:59.719 nvme0n4: ios=1329/1536, merge=0/0, ticks=572/270, in_queue=842, util=95.70% 00:30:59.719 10:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:30:59.719 [global] 00:30:59.719 thread=1 00:30:59.719 invalidate=1 00:30:59.719 rw=randwrite 00:30:59.719 time_based=1 00:30:59.719 runtime=1 00:30:59.719 ioengine=libaio 00:30:59.719 direct=1 00:30:59.719 bs=4096 00:30:59.719 iodepth=1 00:30:59.719 norandommap=0 00:30:59.719 numjobs=1 00:30:59.719 00:30:59.719 verify_dump=1 00:30:59.719 verify_backlog=512 00:30:59.719 verify_state_save=0 00:30:59.719 do_verify=1 00:30:59.719 verify=crc32c-intel 00:30:59.719 [job0] 00:30:59.719 filename=/dev/nvme0n1 00:30:59.719 [job1] 00:30:59.719 filename=/dev/nvme0n2 00:30:59.719 [job2] 00:30:59.719 filename=/dev/nvme0n3 00:30:59.719 [job3] 00:30:59.719 filename=/dev/nvme0n4 00:30:59.719 Could not set queue depth (nvme0n1) 00:30:59.719 Could not set queue depth (nvme0n2) 00:30:59.719 Could not set queue depth (nvme0n3) 00:30:59.719 Could not set queue depth (nvme0n4) 00:30:59.977 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:59.977 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:59.977 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:59.977 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:59.977 fio-3.35 00:30:59.977 Starting 4 threads 00:31:01.351 00:31:01.351 job0: (groupid=0, jobs=1): err= 0: pid=3895641: Wed Nov 20 10:04:37 2024 00:31:01.351 read: IOPS=21, BW=87.0KiB/s (89.1kB/s)(88.0KiB/1011msec) 00:31:01.351 slat (nsec): min=8190, max=28289, avg=14636.27, stdev=3377.28 00:31:01.351 clat (usec): min=40909, max=41265, avg=40989.91, stdev=68.37 00:31:01.351 lat (usec): min=40923, max=41273, avg=41004.54, stdev=67.52 00:31:01.351 clat percentiles (usec): 00:31:01.351 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:01.351 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:01.351 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:01.351 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:01.351 | 99.99th=[41157] 00:31:01.351 write: IOPS=506, BW=2026KiB/s 
(2074kB/s)(2048KiB/1011msec); 0 zone resets 00:31:01.351 slat (nsec): min=7560, max=49494, avg=15567.47, stdev=7249.30 00:31:01.352 clat (usec): min=154, max=321, avg=190.98, stdev=20.85 00:31:01.352 lat (usec): min=163, max=355, avg=206.55, stdev=24.72 00:31:01.352 clat percentiles (usec): 00:31:01.352 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:31:01.352 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:31:01.352 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 225], 00:31:01.352 | 99.00th=[ 260], 99.50th=[ 297], 99.90th=[ 322], 99.95th=[ 322], 00:31:01.352 | 99.99th=[ 322] 00:31:01.352 bw ( KiB/s): min= 4096, max= 4096, per=18.38%, avg=4096.00, stdev= 0.00, samples=1 00:31:01.352 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:01.352 lat (usec) : 250=94.76%, 500=1.12% 00:31:01.352 lat (msec) : 50=4.12% 00:31:01.352 cpu : usr=0.40%, sys=1.19%, ctx=536, majf=0, minf=1 00:31:01.352 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.352 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.352 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:01.352 job1: (groupid=0, jobs=1): err= 0: pid=3895642: Wed Nov 20 10:04:37 2024 00:31:01.352 read: IOPS=1742, BW=6969KiB/s (7136kB/s)(6976KiB/1001msec) 00:31:01.352 slat (nsec): min=5073, max=37871, avg=7589.68, stdev=3629.82 00:31:01.352 clat (usec): min=212, max=3847, avg=301.07, stdev=117.35 00:31:01.352 lat (usec): min=219, max=3856, avg=308.66, stdev=118.22 00:31:01.352 clat percentiles (usec): 00:31:01.352 | 1.00th=[ 227], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 245], 00:31:01.352 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 289], 00:31:01.352 | 70.00th=[ 310], 80.00th=[ 338], 90.00th=[ 404], 95.00th=[ 461], 00:31:01.352 | 99.00th=[ 611], 99.50th=[ 627], 99.90th=[ 1385], 99.95th=[ 3851], 00:31:01.352 | 99.99th=[ 3851] 00:31:01.352 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:31:01.352 slat (nsec): min=6216, max=53351, avg=10102.86, stdev=4875.45 00:31:01.352 clat (usec): min=150, max=931, avg=210.40, stdev=43.96 00:31:01.352 lat (usec): min=158, max=940, avg=220.51, stdev=44.91 00:31:01.352 clat percentiles (usec): 00:31:01.352 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:31:01.352 | 30.00th=[ 178], 40.00th=[ 188], 50.00th=[ 202], 60.00th=[ 221], 00:31:01.352 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 277], 00:31:01.352 | 99.00th=[ 326], 99.50th=[ 392], 99.90th=[ 404], 99.95th=[ 457], 00:31:01.352 | 99.99th=[ 930] 00:31:01.352 bw ( KiB/s): min= 8192, max= 8192, per=36.76%, avg=8192.00, stdev= 0.00, samples=1 00:31:01.352 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:01.352 lat (usec) : 250=58.65%, 500=39.87%, 750=1.40%, 1000=0.03% 00:31:01.352 lat (msec) : 2=0.03%, 4=0.03% 00:31:01.352 cpu : usr=2.30%, sys=4.70%, ctx=3794, majf=0, minf=1 00:31:01.352 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.352 issued rwts: total=1744,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.352 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:31:01.352 job2: (groupid=0, jobs=1): err= 0: pid=3895643: Wed Nov 20 10:04:37 2024 00:31:01.352 read: IOPS=1018, BW=4076KiB/s (4174kB/s)(4080KiB/1001msec) 00:31:01.352 slat (nsec): min=6096, max=27469, avg=7850.45, stdev=2739.36 00:31:01.352 clat (usec): min=211, max=41094, avg=713.65, stdev=3809.15 00:31:01.352 lat (usec): min=217, max=41108, avg=721.50, stdev=3809.71 00:31:01.352 clat percentiles (usec): 00:31:01.352 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 260], 00:31:01.352 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 343], 00:31:01.352 | 70.00th=[ 383], 80.00th=[ 429], 90.00th=[ 461], 95.00th=[ 515], 00:31:01.352 | 99.00th=[ 3982], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:01.352 | 99.99th=[41157] 00:31:01.352 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:31:01.352 slat (nsec): min=6508, max=47527, avg=11957.75, stdev=6117.07 00:31:01.352 clat (usec): min=153, max=467, avg=240.84, stdev=49.14 00:31:01.352 lat (usec): min=161, max=497, avg=252.80, stdev=49.65 00:31:01.352 clat percentiles (usec): 00:31:01.352 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 212], 00:31:01.352 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 247], 00:31:01.352 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 281], 95.00th=[ 334], 00:31:01.352 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 465], 99.95th=[ 469], 00:31:01.352 | 99.99th=[ 469] 00:31:01.352 bw ( KiB/s): min= 4096, max= 4096, per=18.38%, avg=4096.00, stdev= 0.00, samples=1 00:31:01.352 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:01.352 lat (usec) : 250=39.48%, 500=57.53%, 750=2.30%, 1000=0.05% 00:31:01.352 lat (msec) : 2=0.05%, 4=0.10%, 10=0.05%, 50=0.44% 00:31:01.352 cpu : usr=1.20%, sys=2.60%, ctx=2046, majf=0, minf=1 00:31:01.352 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.352 issued rwts: total=1020,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.352 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:01.352 job3: (groupid=0, jobs=1): err= 0: pid=3895644: Wed Nov 20 10:04:37 2024 00:31:01.352 read: IOPS=1857, BW=7429KiB/s (7607kB/s)(7436KiB/1001msec) 00:31:01.352 slat (nsec): min=4315, max=67184, avg=7645.67, stdev=4431.13 00:31:01.352 clat (usec): min=187, max=580, avg=287.69, stdev=48.96 00:31:01.352 lat (usec): min=196, max=590, avg=295.34, stdev=50.38 00:31:01.352 clat percentiles (usec): 00:31:01.352 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 255], 00:31:01.352 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:31:01.352 | 70.00th=[ 289], 80.00th=[ 314], 90.00th=[ 371], 95.00th=[ 392], 00:31:01.352 | 99.00th=[ 453], 99.50th=[ 469], 99.90th=[ 578], 99.95th=[ 578], 00:31:01.352 | 99.99th=[ 578] 00:31:01.352 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:31:01.352 slat (nsec): min=5414, max=50675, avg=8783.97, stdev=4317.02 00:31:01.352 clat (usec): min=165, max=422, avg=206.85, stdev=34.06 00:31:01.352 lat (usec): min=172, max=430, avg=215.64, stdev=35.68 00:31:01.352 clat percentiles (usec): 00:31:01.352 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 182], 00:31:01.352 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 202], 00:31:01.352 | 70.00th=[ 219], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 265], 
00:31:01.352 | 99.00th=[ 347], 99.50th=[ 371], 99.90th=[ 400], 99.95th=[ 408], 00:31:01.352 | 99.99th=[ 424] 00:31:01.352 bw ( KiB/s): min= 8192, max= 8192, per=36.76%, avg=8192.00, stdev= 0.00, samples=1 00:31:01.352 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:01.352 lat (usec) : 250=53.39%, 500=46.46%, 750=0.15% 00:31:01.352 cpu : usr=2.50%, sys=3.40%, ctx=3907, majf=0, minf=1 00:31:01.352 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.352 issued rwts: total=1859,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.352 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:01.352 00:31:01.352 Run status group 0 (all jobs): 00:31:01.352 READ: bw=17.9MiB/s (18.8MB/s), 87.0KiB/s-7429KiB/s (89.1kB/s-7607kB/s), io=18.1MiB (19.0MB), run=1001-1011msec 00:31:01.352 WRITE: bw=21.8MiB/s (22.8MB/s), 2026KiB/s-8184KiB/s (2074kB/s-8380kB/s), io=22.0MiB (23.1MB), run=1001-1011msec 00:31:01.352 00:31:01.352 Disk stats (read/write): 00:31:01.352 nvme0n1: ios=53/512, merge=0/0, ticks=1546/97, in_queue=1643, util=97.60% 00:31:01.353 nvme0n2: ios=1583/1711, merge=0/0, ticks=1102/329, in_queue=1431, util=97.36% 00:31:01.353 nvme0n3: ios=675/1024, merge=0/0, ticks=1222/231, in_queue=1453, util=98.65% 00:31:01.353 nvme0n4: ios=1536/1762, merge=0/0, ticks=436/358, in_queue=794, util=89.62% 00:31:01.353 10:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:01.353 [global] 00:31:01.353 thread=1 00:31:01.353 invalidate=1 00:31:01.353 rw=write 00:31:01.353 time_based=1 00:31:01.353 runtime=1 00:31:01.353 ioengine=libaio 00:31:01.353 direct=1 00:31:01.353 bs=4096 00:31:01.353 iodepth=128 00:31:01.353 norandommap=0 00:31:01.353 numjobs=1 00:31:01.353 00:31:01.353 verify_dump=1 00:31:01.353 verify_backlog=512 00:31:01.353 verify_state_save=0 00:31:01.353 do_verify=1 00:31:01.353 verify=crc32c-intel 00:31:01.353 [job0] 00:31:01.353 filename=/dev/nvme0n1 00:31:01.353 [job1] 00:31:01.353 filename=/dev/nvme0n2 00:31:01.353 [job2] 00:31:01.353 filename=/dev/nvme0n3 00:31:01.353 [job3] 00:31:01.353 filename=/dev/nvme0n4 00:31:01.353 Could not set queue depth (nvme0n1) 00:31:01.353 Could not set queue depth (nvme0n2) 00:31:01.353 Could not set queue depth (nvme0n3) 00:31:01.353 Could not set queue depth (nvme0n4) 00:31:01.353 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:01.353 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:01.353 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:01.353 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:01.353 fio-3.35 00:31:01.353 Starting 4 threads 00:31:02.728 00:31:02.728 job0: (groupid=0, jobs=1): err= 0: pid=3895886: Wed Nov 20 10:04:39 2024 00:31:02.728 read: IOPS=4300, BW=16.8MiB/s (17.6MB/s)(16.9MiB/1009msec) 00:31:02.728 slat (usec): min=2, max=31284, avg=119.78, stdev=1044.75 00:31:02.728 clat (usec): min=2560, max=54678, avg=15952.68, stdev=7982.93 00:31:02.728 lat (usec): min=3015, max=54694, avg=16072.46, stdev=8057.75 00:31:02.728 clat 
percentiles (usec): 00:31:02.728 | 1.00th=[ 5800], 5.00th=[ 8586], 10.00th=[10028], 20.00th=[10814], 00:31:02.728 | 30.00th=[11338], 40.00th=[11600], 50.00th=[12649], 60.00th=[14222], 00:31:02.728 | 70.00th=[16057], 80.00th=[19792], 90.00th=[30802], 95.00th=[33162], 00:31:02.728 | 99.00th=[39584], 99.50th=[39584], 99.90th=[53216], 99.95th=[53216], 00:31:02.728 | 99.99th=[54789] 00:31:02.728 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:31:02.728 slat (usec): min=3, max=18657, avg=91.68, stdev=749.84 00:31:02.728 clat (usec): min=970, max=32931, avg=12679.47, stdev=4171.88 00:31:02.728 lat (usec): min=985, max=38927, avg=12771.15, stdev=4245.04 00:31:02.728 clat percentiles (usec): 00:31:02.728 | 1.00th=[ 5735], 5.00th=[ 5932], 10.00th=[ 7504], 20.00th=[ 9634], 00:31:02.728 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11863], 60.00th=[12387], 00:31:02.728 | 70.00th=[13829], 80.00th=[16319], 90.00th=[19530], 95.00th=[20317], 00:31:02.728 | 99.00th=[22676], 99.50th=[26084], 99.90th=[28443], 99.95th=[29754], 00:31:02.728 | 99.99th=[32900] 00:31:02.728 bw ( KiB/s): min=16384, max=20480, per=26.61%, avg=18432.00, stdev=2896.31, samples=2 00:31:02.728 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:31:02.728 lat (usec) : 1000=0.03% 00:31:02.728 lat (msec) : 4=0.18%, 10=15.16%, 20=70.35%, 50=14.17%, 100=0.11% 00:31:02.728 cpu : usr=3.97%, sys=5.95%, ctx=286, majf=0, minf=2 00:31:02.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:02.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:02.728 issued rwts: total=4339,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:02.728 job1: (groupid=0, jobs=1): err= 0: pid=3895887: Wed Nov 20 10:04:39 2024 00:31:02.728 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:31:02.728 slat (usec): min=2, max=17523, avg=112.58, stdev=950.56 00:31:02.728 clat (usec): min=4459, max=39032, avg=15019.13, stdev=5419.60 00:31:02.728 lat (usec): min=4466, max=39043, avg=15131.70, stdev=5485.64 00:31:02.728 clat percentiles (usec): 00:31:02.728 | 1.00th=[ 8586], 5.00th=[10028], 10.00th=[10814], 20.00th=[11469], 00:31:02.728 | 30.00th=[11731], 40.00th=[12256], 50.00th=[13042], 60.00th=[13829], 00:31:02.728 | 70.00th=[16188], 80.00th=[17695], 90.00th=[21890], 95.00th=[26084], 00:31:02.728 | 99.00th=[37487], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:31:02.728 | 99.99th=[39060] 00:31:02.728 write: IOPS=4181, BW=16.3MiB/s (17.1MB/s)(16.5MiB/1008msec); 0 zone resets 00:31:02.728 slat (usec): min=4, max=20642, avg=117.92, stdev=824.89 00:31:02.728 clat (usec): min=2036, max=44273, avg=15714.63, stdev=7487.35 00:31:02.728 lat (usec): min=2893, max=44302, avg=15832.54, stdev=7550.36 00:31:02.728 clat percentiles (usec): 00:31:02.728 | 1.00th=[ 5080], 5.00th=[ 8094], 10.00th=[ 9503], 20.00th=[11338], 00:31:02.728 | 30.00th=[11994], 40.00th=[12518], 50.00th=[13173], 60.00th=[15139], 00:31:02.728 | 70.00th=[15926], 80.00th=[18220], 90.00th=[25560], 95.00th=[34866], 00:31:02.728 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:31:02.728 | 99.99th=[44303] 00:31:02.728 bw ( KiB/s): min=16384, max=16432, per=23.69%, avg=16408.00, stdev=33.94, samples=2 00:31:02.728 iops : min= 4096, max= 4108, avg=4102.00, stdev= 8.49, samples=2 00:31:02.728 lat (msec) : 4=0.25%, 10=7.07%, 
20=77.85%, 50=14.82% 00:31:02.728 cpu : usr=6.06%, sys=8.94%, ctx=307, majf=0, minf=1 00:31:02.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:02.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:02.728 issued rwts: total=4096,4215,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:02.728 job2: (groupid=0, jobs=1): err= 0: pid=3895888: Wed Nov 20 10:04:39 2024 00:31:02.728 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:31:02.728 slat (usec): min=2, max=15077, avg=109.05, stdev=791.85 00:31:02.728 clat (usec): min=4508, max=52775, avg=14653.60, stdev=5404.86 00:31:02.728 lat (usec): min=4523, max=52781, avg=14762.65, stdev=5440.12 00:31:02.728 clat percentiles (usec): 00:31:02.728 | 1.00th=[ 6194], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[11338], 00:31:02.728 | 30.00th=[12125], 40.00th=[12780], 50.00th=[13435], 60.00th=[14222], 00:31:02.728 | 70.00th=[15270], 80.00th=[16319], 90.00th=[19268], 95.00th=[24773], 00:31:02.728 | 99.00th=[34341], 99.50th=[35914], 99.90th=[52691], 99.95th=[52691], 00:31:02.728 | 99.99th=[52691] 00:31:02.728 write: IOPS=4535, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1004msec); 0 zone resets 00:31:02.728 slat (usec): min=3, max=29362, avg=98.03, stdev=887.08 00:31:02.728 clat (usec): min=2438, max=71604, avg=14364.46, stdev=7217.55 00:31:02.728 lat (usec): min=2448, max=71613, avg=14462.49, stdev=7261.31 00:31:02.728 clat percentiles (usec): 00:31:02.728 | 1.00th=[ 5276], 5.00th=[ 7767], 10.00th=[ 8848], 20.00th=[10814], 00:31:02.728 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13304], 60.00th=[14091], 00:31:02.728 | 70.00th=[14746], 80.00th=[15533], 90.00th=[18482], 95.00th=[22676], 00:31:02.728 | 99.00th=[49021], 99.50th=[63177], 99.90th=[69731], 99.95th=[70779], 00:31:02.728 | 99.99th=[71828] 00:31:02.728 bw ( KiB/s): min=15864, max=19552, per=25.56%, avg=17708.00, stdev=2607.81, samples=2 00:31:02.728 iops : min= 3966, max= 4888, avg=4427.00, stdev=651.95, samples=2 00:31:02.728 lat (msec) : 4=0.18%, 10=13.65%, 20=77.70%, 50=7.94%, 100=0.52% 00:31:02.728 cpu : usr=5.48%, sys=5.88%, ctx=288, majf=0, minf=1 00:31:02.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:02.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:02.728 issued rwts: total=4096,4554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:02.728 job3: (groupid=0, jobs=1): err= 0: pid=3895889: Wed Nov 20 10:04:39 2024 00:31:02.728 read: IOPS=3982, BW=15.6MiB/s (16.3MB/s)(15.6MiB/1004msec) 00:31:02.728 slat (usec): min=2, max=26912, avg=125.95, stdev=843.65 00:31:02.728 clat (usec): min=481, max=70181, avg=15820.12, stdev=7497.24 00:31:02.728 lat (usec): min=3952, max=70201, avg=15946.07, stdev=7552.86 00:31:02.728 clat percentiles (usec): 00:31:02.728 | 1.00th=[ 7701], 5.00th=[11076], 10.00th=[11994], 20.00th=[12911], 00:31:02.728 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:31:02.728 | 70.00th=[14746], 80.00th=[15401], 90.00th=[17695], 95.00th=[32375], 00:31:02.728 | 99.00th=[50594], 99.50th=[61604], 99.90th=[69731], 99.95th=[69731], 00:31:02.728 | 99.99th=[69731] 00:31:02.728 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone 
resets 00:31:02.728 slat (usec): min=3, max=23546, avg=111.74, stdev=670.83 00:31:02.728 clat (usec): min=4045, max=61486, avg=15657.52, stdev=7407.36 00:31:02.728 lat (usec): min=4063, max=61496, avg=15769.27, stdev=7440.31 00:31:02.728 clat percentiles (usec): 00:31:02.728 | 1.00th=[ 4621], 5.00th=[10814], 10.00th=[11731], 20.00th=[12387], 00:31:02.728 | 30.00th=[12649], 40.00th=[13304], 50.00th=[13960], 60.00th=[14353], 00:31:02.728 | 70.00th=[14877], 80.00th=[15401], 90.00th=[21627], 95.00th=[30802], 00:31:02.728 | 99.00th=[53216], 99.50th=[53216], 99.90th=[61604], 99.95th=[61604], 00:31:02.728 | 99.99th=[61604] 00:31:02.728 bw ( KiB/s): min=16296, max=16472, per=23.65%, avg=16384.00, stdev=124.45, samples=2 00:31:02.728 iops : min= 4074, max= 4118, avg=4096.00, stdev=31.11, samples=2 00:31:02.728 lat (usec) : 500=0.01% 00:31:02.728 lat (msec) : 4=0.09%, 10=2.57%, 20=87.40%, 50=8.83%, 100=1.10% 00:31:02.728 cpu : usr=4.89%, sys=8.77%, ctx=469, majf=0, minf=2 00:31:02.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:02.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:02.728 issued rwts: total=3998,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:02.728 00:31:02.728 Run status group 0 (all jobs): 00:31:02.729 READ: bw=64.0MiB/s (67.1MB/s), 15.6MiB/s-16.8MiB/s (16.3MB/s-17.6MB/s), io=64.6MiB (67.7MB), run=1004-1009msec 00:31:02.729 WRITE: bw=67.6MiB/s (70.9MB/s), 15.9MiB/s-17.8MiB/s (16.7MB/s-18.7MB/s), io=68.3MiB (71.6MB), run=1004-1009msec 00:31:02.729 00:31:02.729 Disk stats (read/write): 00:31:02.729 nvme0n1: ios=3606/3772, merge=0/0, ticks=46551/30977, in_queue=77528, util=98.10% 00:31:02.729 nvme0n2: ios=3130/3584, merge=0/0, ticks=48141/56729, in_queue=104870, util=98.27% 00:31:02.729 nvme0n3: ios=3643/3979, merge=0/0, ticks=34037/32310, in_queue=66347, util=98.12% 00:31:02.729 nvme0n4: ios=3152/3584, merge=0/0, ticks=20048/18030, in_queue=38078, util=88.98% 00:31:02.729 10:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:02.729 [global] 00:31:02.729 thread=1 00:31:02.729 invalidate=1 00:31:02.729 rw=randwrite 00:31:02.729 time_based=1 00:31:02.729 runtime=1 00:31:02.729 ioengine=libaio 00:31:02.729 direct=1 00:31:02.729 bs=4096 00:31:02.729 iodepth=128 00:31:02.729 norandommap=0 00:31:02.729 numjobs=1 00:31:02.729 00:31:02.729 verify_dump=1 00:31:02.729 verify_backlog=512 00:31:02.729 verify_state_save=0 00:31:02.729 do_verify=1 00:31:02.729 verify=crc32c-intel 00:31:02.729 [job0] 00:31:02.729 filename=/dev/nvme0n1 00:31:02.729 [job1] 00:31:02.729 filename=/dev/nvme0n2 00:31:02.729 [job2] 00:31:02.729 filename=/dev/nvme0n3 00:31:02.729 [job3] 00:31:02.729 filename=/dev/nvme0n4 00:31:02.729 Could not set queue depth (nvme0n1) 00:31:02.729 Could not set queue depth (nvme0n2) 00:31:02.729 Could not set queue depth (nvme0n3) 00:31:02.729 Could not set queue depth (nvme0n4) 00:31:02.987 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:02.987 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:02.987 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:31:02.987 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:02.987 fio-3.35 00:31:02.987 Starting 4 threads 00:31:04.362 00:31:04.362 job0: (groupid=0, jobs=1): err= 0: pid=3896122: Wed Nov 20 10:04:40 2024 00:31:04.362 read: IOPS=5326, BW=20.8MiB/s (21.8MB/s)(21.0MiB/1007msec) 00:31:04.362 slat (usec): min=2, max=17505, avg=93.96, stdev=815.80 00:31:04.362 clat (usec): min=1236, max=32513, avg=11971.24, stdev=4098.11 00:31:04.362 lat (usec): min=1439, max=32529, avg=12065.21, stdev=4151.89 00:31:04.362 clat percentiles (usec): 00:31:04.362 | 1.00th=[ 1614], 5.00th=[ 6456], 10.00th=[ 8717], 20.00th=[ 9634], 00:31:04.362 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10552], 60.00th=[11731], 00:31:04.362 | 70.00th=[13435], 80.00th=[15270], 90.00th=[17695], 95.00th=[19792], 00:31:04.362 | 99.00th=[25560], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:31:04.362 | 99.99th=[32637] 00:31:04.362 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:31:04.362 slat (usec): min=2, max=13530, avg=79.83, stdev=650.40 00:31:04.362 clat (usec): min=2118, max=26014, avg=11210.10, stdev=2995.75 00:31:04.362 lat (usec): min=2124, max=26026, avg=11289.93, stdev=3052.83 00:31:04.362 clat percentiles (usec): 00:31:04.362 | 1.00th=[ 3589], 5.00th=[ 6521], 10.00th=[ 7504], 20.00th=[ 8979], 00:31:04.362 | 30.00th=[ 9634], 40.00th=[10552], 50.00th=[10945], 60.00th=[11469], 00:31:04.362 | 70.00th=[12780], 80.00th=[13698], 90.00th=[14615], 95.00th=[15401], 00:31:04.362 | 99.00th=[20055], 99.50th=[21365], 99.90th=[21890], 99.95th=[24511], 00:31:04.362 | 99.99th=[26084] 00:31:04.362 bw ( KiB/s): min=21920, max=23136, per=34.26%, avg=22528.00, stdev=859.84, samples=2 00:31:04.362 iops : min= 5480, max= 5784, avg=5632.00, stdev=214.96, samples=2 00:31:04.362 lat (msec) : 2=0.77%, 4=1.26%, 10=32.88%, 20=62.91%, 50=2.17% 00:31:04.362 cpu : usr=4.27%, sys=6.06%, ctx=359, majf=0, minf=1 00:31:04.362 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:31:04.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:04.362 issued rwts: total=5364,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:04.362 job1: (groupid=0, jobs=1): err= 0: pid=3896123: Wed Nov 20 10:04:40 2024 00:31:04.362 read: IOPS=3589, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1009msec) 00:31:04.362 slat (usec): min=2, max=13524, avg=108.66, stdev=743.74 00:31:04.362 clat (usec): min=2568, max=48909, avg=14804.22, stdev=9239.00 00:31:04.362 lat (usec): min=2596, max=54828, avg=14912.88, stdev=9311.89 00:31:04.362 clat percentiles (usec): 00:31:04.362 | 1.00th=[ 4359], 5.00th=[ 6456], 10.00th=[ 7308], 20.00th=[ 8979], 00:31:04.362 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[10945], 60.00th=[11731], 00:31:04.362 | 70.00th=[12911], 80.00th=[23462], 90.00th=[30278], 95.00th=[35390], 00:31:04.362 | 99.00th=[44303], 99.50th=[45876], 99.90th=[49021], 99.95th=[49021], 00:31:04.362 | 99.99th=[49021] 00:31:04.362 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:31:04.363 slat (usec): min=3, max=8643, avg=129.83, stdev=682.48 00:31:04.363 clat (usec): min=884, max=62999, avg=18081.97, stdev=15099.26 00:31:04.363 lat (usec): min=906, max=63007, avg=18211.80, stdev=15213.56 00:31:04.363 clat percentiles (usec): 00:31:04.363 | 
1.00th=[ 4555], 5.00th=[ 7177], 10.00th=[ 8455], 20.00th=[10290], 00:31:04.363 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[12387], 00:31:04.363 | 70.00th=[14615], 80.00th=[21365], 90.00th=[53216], 95.00th=[58459], 00:31:04.363 | 99.00th=[59507], 99.50th=[62653], 99.90th=[63177], 99.95th=[63177], 00:31:04.363 | 99.99th=[63177] 00:31:04.363 bw ( KiB/s): min=15632, max=16416, per=24.37%, avg=16024.00, stdev=554.37, samples=2 00:31:04.363 iops : min= 3908, max= 4104, avg=4006.00, stdev=138.59, samples=2 00:31:04.363 lat (usec) : 1000=0.10% 00:31:04.363 lat (msec) : 2=0.13%, 4=0.61%, 10=24.20%, 20=51.74%, 50=17.54% 00:31:04.363 lat (msec) : 100=5.68% 00:31:04.363 cpu : usr=4.17%, sys=6.25%, ctx=457, majf=0, minf=2 00:31:04.363 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:04.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:04.363 issued rwts: total=3622,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:04.363 job2: (groupid=0, jobs=1): err= 0: pid=3896124: Wed Nov 20 10:04:40 2024 00:31:04.363 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:31:04.363 slat (usec): min=2, max=14885, avg=149.35, stdev=927.13 00:31:04.363 clat (usec): min=9109, max=56285, avg=19603.02, stdev=10065.52 00:31:04.363 lat (usec): min=9120, max=56299, avg=19752.37, stdev=10147.83 00:31:04.363 clat percentiles (usec): 00:31:04.363 | 1.00th=[10421], 5.00th=[11863], 10.00th=[12780], 20.00th=[13042], 00:31:04.363 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:31:04.363 | 70.00th=[25822], 80.00th=[31065], 90.00th=[34341], 95.00th=[40109], 00:31:04.363 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51643], 99.95th=[54264], 00:31:04.363 | 99.99th=[56361] 00:31:04.363 write: IOPS=3462, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1008msec); 0 zone resets 00:31:04.363 slat (usec): min=4, max=22945, avg=146.62, stdev=910.62 00:31:04.363 clat (usec): min=6826, max=65794, avg=19258.91, stdev=9990.30 00:31:04.363 lat (usec): min=7765, max=65811, avg=19405.53, stdev=10075.19 00:31:04.363 clat percentiles (usec): 00:31:04.363 | 1.00th=[10159], 5.00th=[11338], 10.00th=[12780], 20.00th=[13173], 00:31:04.363 | 30.00th=[13435], 40.00th=[13698], 50.00th=[14222], 60.00th=[16581], 00:31:04.363 | 70.00th=[18744], 80.00th=[24249], 90.00th=[36963], 95.00th=[42730], 00:31:04.363 | 99.00th=[49546], 99.50th=[51119], 99.90th=[52691], 99.95th=[57934], 00:31:04.363 | 99.99th=[65799] 00:31:04.363 bw ( KiB/s): min=10520, max=16384, per=20.46%, avg=13452.00, stdev=4146.47, samples=2 00:31:04.363 iops : min= 2630, max= 4096, avg=3363.00, stdev=1036.62, samples=2 00:31:04.363 lat (msec) : 10=0.64%, 20=70.41%, 50=28.13%, 100=0.82% 00:31:04.363 cpu : usr=2.98%, sys=6.45%, ctx=382, majf=0, minf=1 00:31:04.363 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:31:04.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:04.363 issued rwts: total=3072,3490,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:04.363 job3: (groupid=0, jobs=1): err= 0: pid=3896125: Wed Nov 20 10:04:40 2024 00:31:04.363 read: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec) 00:31:04.363 slat (usec): min=2, max=26393, 
avg=156.30, stdev=1133.39 00:31:04.363 clat (usec): min=9700, max=77355, avg=19764.98, stdev=9027.52 00:31:04.363 lat (usec): min=9715, max=77364, avg=19921.28, stdev=9125.12 00:31:04.363 clat percentiles (usec): 00:31:04.363 | 1.00th=[ 9896], 5.00th=[12780], 10.00th=[14222], 20.00th=[14615], 00:31:04.363 | 30.00th=[15008], 40.00th=[16188], 50.00th=[17695], 60.00th=[19006], 00:31:04.363 | 70.00th=[20579], 80.00th=[22414], 90.00th=[26608], 95.00th=[30802], 00:31:04.363 | 99.00th=[74974], 99.50th=[76022], 99.90th=[77071], 99.95th=[77071], 00:31:04.363 | 99.99th=[77071] 00:31:04.363 write: IOPS=3349, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1010msec); 0 zone resets 00:31:04.363 slat (usec): min=3, max=15712, avg=142.12, stdev=919.50 00:31:04.363 clat (usec): min=7989, max=62383, avg=19736.78, stdev=9554.09 00:31:04.363 lat (usec): min=9372, max=62429, avg=19878.91, stdev=9604.17 00:31:04.363 clat percentiles (usec): 00:31:04.363 | 1.00th=[10945], 5.00th=[13829], 10.00th=[14091], 20.00th=[14484], 00:31:04.363 | 30.00th=[14877], 40.00th=[15401], 50.00th=[15795], 60.00th=[17171], 00:31:04.363 | 70.00th=[19268], 80.00th=[20317], 90.00th=[32375], 95.00th=[43254], 00:31:04.363 | 99.00th=[61604], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:31:04.363 | 99.99th=[62129] 00:31:04.363 bw ( KiB/s): min=12288, max=13760, per=19.81%, avg=13024.00, stdev=1040.86, samples=2 00:31:04.363 iops : min= 3072, max= 3440, avg=3256.00, stdev=260.22, samples=2 00:31:04.363 lat (msec) : 10=0.71%, 20=72.56%, 50=23.80%, 100=2.93% 00:31:04.363 cpu : usr=4.96%, sys=8.03%, ctx=211, majf=0, minf=1 00:31:04.363 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:31:04.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:04.363 issued rwts: total=3072,3383,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:04.363 00:31:04.363 Run status group 0 (all jobs): 00:31:04.363 READ: bw=58.5MiB/s (61.4MB/s), 11.9MiB/s-20.8MiB/s (12.5MB/s-21.8MB/s), io=59.1MiB (62.0MB), run=1007-1010msec 00:31:04.363 WRITE: bw=64.2MiB/s (67.3MB/s), 13.1MiB/s-21.8MiB/s (13.7MB/s-22.9MB/s), io=64.8MiB (68.0MB), run=1007-1010msec 00:31:04.363 00:31:04.363 Disk stats (read/write): 00:31:04.363 nvme0n1: ios=4628/4695, merge=0/0, ticks=50679/45239, in_queue=95918, util=90.58% 00:31:04.363 nvme0n2: ios=2936/3072, merge=0/0, ticks=18948/24545, in_queue=43493, util=96.04% 00:31:04.363 nvme0n3: ios=2881/3072, merge=0/0, ticks=18856/17413, in_queue=36269, util=98.12% 00:31:04.363 nvme0n4: ios=2598/2797, merge=0/0, ticks=24567/20308, in_queue=44875, util=95.91% 00:31:04.363 10:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:04.363 10:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3896262 00:31:04.363 10:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:04.363 10:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:04.363 [global] 00:31:04.363 thread=1 00:31:04.363 invalidate=1 00:31:04.363 rw=read 00:31:04.363 time_based=1 00:31:04.363 runtime=10 00:31:04.363 ioengine=libaio 00:31:04.363 direct=1 00:31:04.363 bs=4096 00:31:04.363 iodepth=1 00:31:04.363 norandommap=1 
00:31:04.363 numjobs=1 00:31:04.363 00:31:04.363 [job0] 00:31:04.363 filename=/dev/nvme0n1 00:31:04.363 [job1] 00:31:04.363 filename=/dev/nvme0n2 00:31:04.363 [job2] 00:31:04.363 filename=/dev/nvme0n3 00:31:04.363 [job3] 00:31:04.363 filename=/dev/nvme0n4 00:31:04.363 Could not set queue depth (nvme0n1) 00:31:04.363 Could not set queue depth (nvme0n2) 00:31:04.363 Could not set queue depth (nvme0n3) 00:31:04.363 Could not set queue depth (nvme0n4) 00:31:04.363 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:04.363 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:04.363 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:04.363 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:04.364 fio-3.35 00:31:04.364 Starting 4 threads 00:31:07.647 10:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:07.647 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:07.647 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=4079616, buflen=4096 00:31:07.647 fio: pid=3896468, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:07.647 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:07.647 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:07.647 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=46620672, buflen=4096 00:31:07.647 fio: pid=3896467, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:07.905 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:07.905 10:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:07.905 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=25116672, buflen=4096 00:31:07.905 fio: pid=3896465, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:08.163 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:08.163 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:08.422 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=36864000, buflen=4096 00:31:08.422 fio: pid=3896466, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:08.422 00:31:08.422 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3896465: Wed Nov 20 10:04:45 2024 00:31:08.422 read: IOPS=1723, BW=6894KiB/s 
(7059kB/s)(24.0MiB/3558msec) 00:31:08.422 slat (usec): min=5, max=10735, avg=12.62, stdev=137.08 00:31:08.422 clat (usec): min=213, max=45030, avg=560.84, stdev=3076.23 00:31:08.422 lat (usec): min=219, max=45047, avg=571.71, stdev=3076.66 00:31:08.422 clat percentiles (usec): 00:31:08.422 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 249], 20.00th=[ 262], 00:31:08.422 | 30.00th=[ 281], 40.00th=[ 297], 50.00th=[ 314], 60.00th=[ 330], 00:31:08.422 | 70.00th=[ 351], 80.00th=[ 383], 90.00th=[ 424], 95.00th=[ 465], 00:31:08.422 | 99.00th=[ 594], 99.50th=[40633], 99.90th=[41157], 99.95th=[42206], 00:31:08.422 | 99.99th=[44827] 00:31:08.422 bw ( KiB/s): min= 312, max=13312, per=28.45%, avg=8154.67, stdev=5334.43, samples=6 00:31:08.422 iops : min= 78, max= 3328, avg=2038.67, stdev=1333.61, samples=6 00:31:08.422 lat (usec) : 250=11.92%, 500=85.28%, 750=2.18% 00:31:08.422 lat (msec) : 4=0.02%, 20=0.02%, 50=0.57% 00:31:08.422 cpu : usr=1.38%, sys=2.76%, ctx=6136, majf=0, minf=2 00:31:08.422 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:08.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.422 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.422 issued rwts: total=6133,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.422 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:08.422 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3896466: Wed Nov 20 10:04:45 2024 00:31:08.422 read: IOPS=2344, BW=9375KiB/s (9600kB/s)(35.2MiB/3840msec) 00:31:08.422 slat (usec): min=3, max=25903, avg=18.50, stdev=305.21 00:31:08.422 clat (usec): min=174, max=41284, avg=403.70, stdev=1820.41 00:31:08.422 lat (usec): min=188, max=66986, avg=422.20, stdev=1940.65 00:31:08.422 clat percentiles (usec): 00:31:08.422 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 223], 20.00th=[ 239], 00:31:08.422 | 30.00th=[ 269], 40.00th=[ 297], 50.00th=[ 318], 60.00th=[ 343], 00:31:08.422 | 70.00th=[ 375], 80.00th=[ 383], 90.00th=[ 416], 95.00th=[ 461], 00:31:08.422 | 99.00th=[ 562], 99.50th=[ 619], 99.90th=[41157], 99.95th=[41157], 00:31:08.422 | 99.99th=[41157] 00:31:08.422 bw ( KiB/s): min= 1820, max=12576, per=35.68%, avg=10225.71, stdev=3773.64, samples=7 00:31:08.422 iops : min= 455, max= 3144, avg=2556.43, stdev=943.41, samples=7 00:31:08.422 lat (usec) : 250=23.93%, 500=73.38%, 750=2.46%, 1000=0.02% 00:31:08.422 lat (msec) : 50=0.20% 00:31:08.422 cpu : usr=1.22%, sys=4.01%, ctx=9004, majf=0, minf=1 00:31:08.422 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:08.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.422 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.422 issued rwts: total=9001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.422 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:08.422 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3896467: Wed Nov 20 10:04:45 2024 00:31:08.422 read: IOPS=3516, BW=13.7MiB/s (14.4MB/s)(44.5MiB/3237msec) 00:31:08.422 slat (nsec): min=5250, max=60821, avg=9438.85, stdev=4887.57 00:31:08.422 clat (usec): min=213, max=40693, avg=270.33, stdev=536.26 00:31:08.422 lat (usec): min=219, max=40700, avg=279.77, stdev=536.35 00:31:08.422 clat percentiles (usec): 00:31:08.422 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 235], 00:31:08.422 | 30.00th=[ 241], 40.00th=[ 
245], 50.00th=[ 253], 60.00th=[ 265], 00:31:08.422 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 310], 95.00th=[ 334], 00:31:08.422 | 99.00th=[ 396], 99.50th=[ 433], 99.90th=[ 594], 99.95th=[ 1090], 00:31:08.422 | 99.99th=[40633] 00:31:08.422 bw ( KiB/s): min=13024, max=15896, per=49.39%, avg=14153.33, stdev=1030.47, samples=6 00:31:08.422 iops : min= 3256, max= 3974, avg=3538.33, stdev=257.62, samples=6 00:31:08.422 lat (usec) : 250=46.65%, 500=53.11%, 750=0.15%, 1000=0.03% 00:31:08.422 lat (msec) : 2=0.04%, 50=0.02% 00:31:08.422 cpu : usr=2.32%, sys=5.16%, ctx=11383, majf=0, minf=2 00:31:08.422 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:08.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.422 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.422 issued rwts: total=11383,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.422 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:08.422 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3896468: Wed Nov 20 10:04:45 2024 00:31:08.422 read: IOPS=339, BW=1358KiB/s (1390kB/s)(3984KiB/2934msec) 00:31:08.422 slat (nsec): min=5382, max=83200, avg=14898.83, stdev=10212.25 00:31:08.422 clat (usec): min=202, max=43966, avg=2903.75, stdev=9991.99 00:31:08.422 lat (usec): min=222, max=43986, avg=2918.65, stdev=9993.21 00:31:08.422 clat percentiles (usec): 00:31:08.422 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 235], 00:31:08.422 | 30.00th=[ 243], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:31:08.422 | 70.00th=[ 289], 80.00th=[ 351], 90.00th=[ 486], 95.00th=[41157], 00:31:08.422 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:31:08.422 | 99.99th=[43779] 00:31:08.422 bw ( KiB/s): min= 96, max= 7496, per=5.50%, avg=1577.60, stdev=3308.49, samples=5 00:31:08.422 iops : min= 24, max= 1874, avg=394.40, stdev=827.12, samples=5 00:31:08.422 lat (usec) : 250=36.11%, 500=55.07%, 750=2.31% 00:31:08.422 lat (msec) : 50=6.42% 00:31:08.422 cpu : usr=0.20%, sys=0.61%, ctx=998, majf=0, minf=2 00:31:08.422 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:08.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.422 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.422 issued rwts: total=997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.422 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:08.422 00:31:08.422 Run status group 0 (all jobs): 00:31:08.423 READ: bw=28.0MiB/s (29.3MB/s), 1358KiB/s-13.7MiB/s (1390kB/s-14.4MB/s), io=107MiB (113MB), run=2934-3840msec 00:31:08.423 00:31:08.423 Disk stats (read/write): 00:31:08.423 nvme0n1: ios=6166/0, merge=0/0, ticks=4247/0, in_queue=4247, util=99.66% 00:31:08.423 nvme0n2: ios=9016/0, merge=0/0, ticks=3421/0, in_queue=3421, util=97.62% 00:31:08.423 nvme0n3: ios=10963/0, merge=0/0, ticks=2894/0, in_queue=2894, util=96.82% 00:31:08.423 nvme0n4: ios=1041/0, merge=0/0, ticks=3811/0, in_queue=3811, util=99.39% 00:31:08.681 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:08.681 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:08.939 10:04:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:08.939 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:09.198 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:09.198 10:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:09.455 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:09.455 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:09.713 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:09.713 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3896262 00:31:09.713 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:09.713 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:09.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:09.713 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:09.713 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:09.713 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:09.713 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:09.713 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:09.713 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:09.971 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:09.971 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:09.971 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:09.971 nvmf hotplug test: fio failed as expected 00:31:09.971 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:10.228 10:04:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:10.228 rmmod nvme_tcp 00:31:10.228 rmmod nvme_fabrics 00:31:10.228 rmmod nvme_keyring 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3894341 ']' 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3894341 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3894341 ']' 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3894341 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:10.228 10:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3894341 00:31:10.228 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:10.228 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:10.228 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3894341' 00:31:10.228 killing process with pid 3894341 00:31:10.228 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3894341 00:31:10.228 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3894341 00:31:10.486 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:10.486 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:10.486 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:10.486 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:10.486 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:10.486 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:10.486 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:10.486 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:10.486 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:10.486 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.486 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.486 10:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:13.014 00:31:13.014 real 0m24.029s 00:31:13.014 user 1m8.175s 00:31:13.014 sys 0m10.339s 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:13.014 ************************************ 00:31:13.014 END TEST nvmf_fio_target 00:31:13.014 ************************************ 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:13.014 ************************************ 00:31:13.014 START TEST nvmf_bdevio 00:31:13.014 ************************************ 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:13.014 * Looking for test storage... 
00:31:13.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:13.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.014 --rc genhtml_branch_coverage=1 00:31:13.014 --rc genhtml_function_coverage=1 00:31:13.014 --rc genhtml_legend=1 00:31:13.014 --rc geninfo_all_blocks=1 00:31:13.014 --rc geninfo_unexecuted_blocks=1 00:31:13.014 00:31:13.014 ' 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:13.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.014 --rc genhtml_branch_coverage=1 00:31:13.014 --rc genhtml_function_coverage=1 00:31:13.014 --rc genhtml_legend=1 00:31:13.014 --rc geninfo_all_blocks=1 00:31:13.014 --rc geninfo_unexecuted_blocks=1 00:31:13.014 00:31:13.014 ' 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:13.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.014 --rc genhtml_branch_coverage=1 00:31:13.014 --rc genhtml_function_coverage=1 00:31:13.014 --rc genhtml_legend=1 00:31:13.014 --rc geninfo_all_blocks=1 00:31:13.014 --rc geninfo_unexecuted_blocks=1 00:31:13.014 00:31:13.014 ' 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:13.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.014 --rc genhtml_branch_coverage=1 00:31:13.014 --rc genhtml_function_coverage=1 00:31:13.014 --rc genhtml_legend=1 00:31:13.014 --rc geninfo_all_blocks=1 00:31:13.014 --rc geninfo_unexecuted_blocks=1 00:31:13.014 00:31:13.014 ' 00:31:13.014 10:04:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:13.014 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:13.015 10:04:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:13.015 10:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:14.917 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:14.918 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:14.918 10:04:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:14.918 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:14.918 Found net devices under 0000:09:00.0: cvl_0_0 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:14.918 Found net devices under 0000:09:00.1: cvl_0_1 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:14.918 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:14.919 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:14.919 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:14.919 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:15.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:15.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:31:15.178 00:31:15.178 --- 10.0.0.2 ping statistics --- 00:31:15.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.178 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:15.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:15.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:31:15.178 00:31:15.178 --- 10.0.0.1 ping statistics --- 00:31:15.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.178 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:15.178 10:04:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3899093 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3899093 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3899093 ']' 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:15.178 10:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:15.178 [2024-11-20 10:04:51.912128] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:15.178 [2024-11-20 10:04:51.913232] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:31:15.178 [2024-11-20 10:04:51.913297] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:15.178 [2024-11-20 10:04:51.984517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:15.178 [2024-11-20 10:04:52.045105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:15.178 [2024-11-20 10:04:52.045153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:15.178 [2024-11-20 10:04:52.045181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:15.178 [2024-11-20 10:04:52.045193] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:15.178 [2024-11-20 10:04:52.045202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:15.178 [2024-11-20 10:04:52.046923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:15.178 [2024-11-20 10:04:52.047000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:15.178 [2024-11-20 10:04:52.047065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:15.178 [2024-11-20 10:04:52.047068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:15.437 [2024-11-20 10:04:52.138723] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:15.437 [2024-11-20 10:04:52.138962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:15.437 [2024-11-20 10:04:52.139204] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:15.437 [2024-11-20 10:04:52.139884] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:15.437 [2024-11-20 10:04:52.140121] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:15.437 [2024-11-20 10:04:52.191725] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:15.437 Malloc0 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.437 10:04:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:15.437 [2024-11-20 10:04:52.259936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.437 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.438 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:15.438 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:15.438 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:15.438 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:15.438 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:15.438 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:15.438 { 00:31:15.438 "params": { 00:31:15.438 "name": "Nvme$subsystem", 00:31:15.438 "trtype": "$TEST_TRANSPORT", 00:31:15.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:15.438 "adrfam": "ipv4", 00:31:15.438 "trsvcid": "$NVMF_PORT", 00:31:15.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:15.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:15.438 "hdgst": ${hdgst:-false}, 00:31:15.438 "ddgst": ${ddgst:-false} 00:31:15.438 }, 00:31:15.438 "method": "bdev_nvme_attach_controller" 00:31:15.438 } 00:31:15.438 EOF 00:31:15.438 )") 00:31:15.438 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:15.438 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:31:15.438 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:15.438 10:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:15.438 "params": { 00:31:15.438 "name": "Nvme1", 00:31:15.438 "trtype": "tcp", 00:31:15.438 "traddr": "10.0.0.2", 00:31:15.438 "adrfam": "ipv4", 00:31:15.438 "trsvcid": "4420", 00:31:15.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:15.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:15.438 "hdgst": false, 00:31:15.438 "ddgst": false 00:31:15.438 }, 00:31:15.438 "method": "bdev_nvme_attach_controller" 00:31:15.438 }' 00:31:15.438 [2024-11-20 10:04:52.309611] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
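(Note: the bdevio binary above reads its bdev configuration from /dev/fd/62, which gen_nvmf_target_json fills with the bdev_nvme_attach_controller parameters printed in the log. A hedged, hand-written equivalent is sketched below; only the "params" block is shown verbatim in the log, so the outer "subsystems"/"bdev" wrapper and the temporary file path are assumptions:)

    # write an equivalent bdev config and hand it to bdevio
    cat > /tmp/bdevio_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json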
00:31:15.438 [2024-11-20 10:04:52.309692] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899125 ] 00:31:15.696 [2024-11-20 10:04:52.378384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:15.696 [2024-11-20 10:04:52.446233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.696 [2024-11-20 10:04:52.446295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:15.696 [2024-11-20 10:04:52.446299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.953 I/O targets: 00:31:15.953 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:15.953 00:31:15.953 00:31:15.953 CUnit - A unit testing framework for C - Version 2.1-3 00:31:15.953 http://cunit.sourceforge.net/ 00:31:15.953 00:31:15.953 00:31:15.953 Suite: bdevio tests on: Nvme1n1 00:31:15.953 Test: blockdev write read block ...passed 00:31:15.953 Test: blockdev write zeroes read block ...passed 00:31:15.953 Test: blockdev write zeroes read no split ...passed 00:31:15.953 Test: blockdev write zeroes read split ...passed 00:31:15.953 Test: blockdev write zeroes read split partial ...passed 00:31:15.953 Test: blockdev reset ...[2024-11-20 10:04:52.740898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:15.953 [2024-11-20 10:04:52.741013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e53640 (9): Bad file descriptor 00:31:15.953 [2024-11-20 10:04:52.793586] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:31:15.953 passed 00:31:15.953 Test: blockdev write read 8 blocks ...passed 00:31:15.953 Test: blockdev write read size > 128k ...passed 00:31:15.953 Test: blockdev write read invalid size ...passed 00:31:16.211 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:16.211 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:16.211 Test: blockdev write read max offset ...passed 00:31:16.211 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:16.211 Test: blockdev writev readv 8 blocks ...passed 00:31:16.211 Test: blockdev writev readv 30 x 1block ...passed 00:31:16.211 Test: blockdev writev readv block ...passed 00:31:16.211 Test: blockdev writev readv size > 128k ...passed 00:31:16.469 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:16.469 Test: blockdev comparev and writev ...[2024-11-20 10:04:53.131465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:16.469 [2024-11-20 10:04:53.131515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:16.469 [2024-11-20 10:04:53.131540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:16.469 [2024-11-20 10:04:53.131557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:16.469 [2024-11-20 10:04:53.131923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:16.469 [2024-11-20 10:04:53.131949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:16.469 [2024-11-20 10:04:53.131972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:16.469 [2024-11-20 10:04:53.131988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:16.469 [2024-11-20 10:04:53.132369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:16.469 [2024-11-20 10:04:53.132407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:16.469 [2024-11-20 10:04:53.132430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:16.469 [2024-11-20 10:04:53.132446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:16.469 [2024-11-20 10:04:53.132810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:16.469 [2024-11-20 10:04:53.132835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:16.469 [2024-11-20 10:04:53.132856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:16.469 [2024-11-20 10:04:53.132872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:16.469 passed 00:31:16.469 Test: blockdev nvme passthru rw ...passed 00:31:16.469 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:04:53.214563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:16.469 [2024-11-20 10:04:53.214593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:16.469 [2024-11-20 10:04:53.214749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:16.469 [2024-11-20 10:04:53.214773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:16.469 [2024-11-20 10:04:53.214926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:16.469 [2024-11-20 10:04:53.214950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:16.469 [2024-11-20 10:04:53.215103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:16.469 [2024-11-20 10:04:53.215126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:16.469 passed 00:31:16.469 Test: blockdev nvme admin passthru ...passed 00:31:16.469 Test: blockdev copy ...passed 00:31:16.469 00:31:16.469 Run Summary: Type Total Ran Passed Failed Inactive 00:31:16.469 suites 1 1 n/a 0 0 00:31:16.469 tests 23 23 23 0 0 00:31:16.469 asserts 152 152 152 0 n/a 00:31:16.469 00:31:16.469 Elapsed time = 1.287 seconds 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:16.727 rmmod nvme_tcp 00:31:16.727 rmmod nvme_fabrics 00:31:16.727 rmmod nvme_keyring 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
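(Note: the teardown above is driven by the trap installed earlier, 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini': host-side NVMe/TCP modules are removed and the target is killed by the PID recorded at startup. A condensed sketch of that cleanup, with the PID variable mirroring nvmfpid from this run purely for illustration:)

    # unload host-side NVMe/TCP modules; harmless if they are already gone
    modprobe -v -r nvme-tcp || true
    modprobe -v -r nvme-fabrics || true
    # stop the nvmf_tgt recorded at startup (3899093 in this run)
    kill "$nvmfpid" 2>/dev/null || true
    wait "$nvmfpid" 2>/dev/null || true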
00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3899093 ']' 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3899093 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3899093 ']' 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3899093 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3899093 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3899093' 00:31:16.727 killing process with pid 3899093 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3899093 00:31:16.727 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3899093 00:31:16.986 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:16.986 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:16.986 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:16.986 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:31:16.986 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:16.986 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:16.986 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:16.986 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:16.986 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:16.986 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.986 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.986 10:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.521 10:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:19.521 00:31:19.521 real 0m6.541s 00:31:19.521 user 
0m8.715s 00:31:19.521 sys 0m2.563s 00:31:19.521 10:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:19.521 10:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:19.521 ************************************ 00:31:19.521 END TEST nvmf_bdevio 00:31:19.521 ************************************ 00:31:19.521 10:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:19.521 00:31:19.521 real 3m55.241s 00:31:19.521 user 8m53.045s 00:31:19.521 sys 1m25.323s 00:31:19.521 10:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:19.521 10:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:19.521 ************************************ 00:31:19.521 END TEST nvmf_target_core_interrupt_mode 00:31:19.521 ************************************ 00:31:19.521 10:04:55 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:19.521 10:04:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:19.521 10:04:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:19.521 10:04:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:19.521 ************************************ 00:31:19.521 START TEST nvmf_interrupt 00:31:19.521 ************************************ 00:31:19.521 10:04:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:19.521 * Looking for test storage... 
00:31:19.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:19.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.521 --rc genhtml_branch_coverage=1 00:31:19.521 --rc genhtml_function_coverage=1 00:31:19.521 --rc genhtml_legend=1 00:31:19.521 --rc geninfo_all_blocks=1 00:31:19.521 --rc geninfo_unexecuted_blocks=1 00:31:19.521 00:31:19.521 ' 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:19.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.521 --rc genhtml_branch_coverage=1 00:31:19.521 --rc genhtml_function_coverage=1 00:31:19.521 --rc genhtml_legend=1 00:31:19.521 --rc geninfo_all_blocks=1 00:31:19.521 --rc geninfo_unexecuted_blocks=1 00:31:19.521 00:31:19.521 ' 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:19.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.521 --rc genhtml_branch_coverage=1 00:31:19.521 --rc genhtml_function_coverage=1 00:31:19.521 --rc genhtml_legend=1 00:31:19.521 --rc geninfo_all_blocks=1 00:31:19.521 --rc geninfo_unexecuted_blocks=1 00:31:19.521 00:31:19.521 ' 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:19.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.521 --rc genhtml_branch_coverage=1 00:31:19.521 --rc genhtml_function_coverage=1 00:31:19.521 --rc genhtml_legend=1 00:31:19.521 --rc geninfo_all_blocks=1 00:31:19.521 --rc geninfo_unexecuted_blocks=1 00:31:19.521 00:31:19.521 ' 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:31:19.521 10:04:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.422 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:21.423 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.423 10:04:58 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:21.423 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:21.423 Found net devices under 0000:09:00.0: cvl_0_0 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:21.423 Found net devices under 0000:09:00.1: cvl_0_1 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:21.423 10:04:58 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:21.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:21.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:31:21.423 00:31:21.423 --- 10.0.0.2 ping statistics --- 00:31:21.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.423 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:21.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:21.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:31:21.423 00:31:21.423 --- 10.0.0.1 ping statistics --- 00:31:21.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.423 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3901325 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3901325 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3901325 ']' 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:21.423 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:21.682 [2024-11-20 10:04:58.375015] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:21.682 [2024-11-20 10:04:58.376067] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:31:21.682 [2024-11-20 10:04:58.376131] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.682 [2024-11-20 10:04:58.446427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:21.683 [2024-11-20 10:04:58.502421] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:21.683 [2024-11-20 10:04:58.502474] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:21.683 [2024-11-20 10:04:58.502503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:21.683 [2024-11-20 10:04:58.502514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:21.683 [2024-11-20 10:04:58.502524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:21.683 [2024-11-20 10:04:58.503988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.683 [2024-11-20 10:04:58.503994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.683 [2024-11-20 10:04:58.589596] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:21.683 [2024-11-20 10:04:58.589626] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:21.683 [2024-11-20 10:04:58.589875] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:31:21.942 5000+0 records in 00:31:21.942 5000+0 records out 00:31:21.942 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0140469 s, 729 MB/s 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:21.942 AIO0 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:21.942 [2024-11-20 10:04:58.688650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.942 10:04:58 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:21.942 [2024-11-20 10:04:58.716831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3901325 0 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3901325 0 idle 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3901325 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3901325 -w 256 00:31:21.942 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3901325 root 20 0 128.2g 47616 34944 S 6.2 0.1 0:00.27 reactor_0' 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3901325 root 20 0 128.2g 47616 34944 S 6.2 0.1 0:00.27 reactor_0 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=6 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3901325 1 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3901325 1 idle 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3901325 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3901325 -w 256 00:31:22.201 10:04:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3901329 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3901329 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3901376 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
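(Note: the reactor_is_busy/reactor_is_idle checks above all follow the same pattern: sample one iteration of top for the target PID, pick the reactor thread's %CPU from column 9, and compare it against the busy/idle thresholds, 30 in this run. A stand-alone sketch of that check; the reactor_cpu helper name is illustrative, not part of the harness:)

    # report the %CPU of reactor_<idx> for a given nvmf_tgt pid
    reactor_cpu() {
        local pid=$1 idx=$2
        # column 9 of 'top -bHn 1' is %CPU for the thread named reactor_<idx>
        top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | awk '{print int($9)}'
    }
    rate=$(reactor_cpu 3901325 0)
    if (( rate >= 30 )); then
        echo "reactor_0 is busy (${rate}%)"
    else
        echo "reactor_0 is idle (${rate}%)"
    fi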
00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3901325 0 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3901325 0 busy 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3901325 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3901325 -w 256 00:31:22.201 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:22.464 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3901325 root 20 0 128.2g 48768 35328 R 80.0 0.1 0:00.39 reactor_0' 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3901325 root 20 0 128.2g 48768 35328 R 80.0 0.1 0:00.39 reactor_0 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=80.0 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=80 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3901325 1 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3901325 1 busy 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3901325 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:22.465 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:22.466 10:04:59 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:31:22.466 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:22.466 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3901325 -w 256 00:31:22.466 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:22.731 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3901329 root 20 0 128.2g 48768 35328 R 99.9 0.1 0:00.23 reactor_1' 00:31:22.731 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3901329 root 20 0 128.2g 48768 35328 R 99.9 0.1 0:00.23 reactor_1 00:31:22.731 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:22.731 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:22.731 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:22.731 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:31:22.731 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:22.731 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:22.731 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:22.731 10:04:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:22.731 10:04:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3901376 00:31:32.724 Initializing NVMe Controllers 00:31:32.724 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:32.724 Controller IO queue size 256, less than required. 00:31:32.724 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:32.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:32.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:32.724 Initialization complete. Launching workers. 
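The reactor_is_busy_or_idle trace above reduces to one batch sample of top filtered for the reactor thread, with column 9 (%CPU) compared against a threshold. A minimal bash sketch of the idle side of that check (illustrative only: reactor_idle_sketch is a made-up name, and the real helper in interrupt/common.sh also retries up to 10 times and handles the busy case):

  # Return 0 if reactor thread <idx> of process <pid> is at or below <idle_threshold>% CPU.
  reactor_idle_sketch() {
      local pid=$1 idx=$2 idle_threshold=${3:-30}
      local line cpu_rate
      # One batch sample (-b) of all threads (-H), single iteration (-n 1), wide output (-w 256).
      line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}") || return 1
      # Column 9 of top's per-thread output is %CPU; strip leading spaces and the fractional part.
      cpu_rate=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')
      cpu_rate=${cpu_rate%.*}
      (( cpu_rate <= idle_threshold ))
  }

In the run above, reactor_0 samples at 0.0% before the workload and at 80% once spdk_nvme_perf starts, which is why the busy check passes against its 30% threshold.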
00:31:32.724 ======================================================== 00:31:32.724 Latency(us) 00:31:32.724 Device Information : IOPS MiB/s Average min max 00:31:32.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13786.94 53.86 18581.30 4160.63 22355.95 00:31:32.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13691.94 53.48 18710.79 3985.42 23113.09 00:31:32.724 ======================================================== 00:31:32.724 Total : 27478.88 107.34 18645.83 3985.42 23113.09 00:31:32.724 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3901325 0 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3901325 0 idle 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3901325 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3901325 -w 256 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3901325 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:20.21 reactor_0' 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3901325 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:20.21 reactor_0 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3901325 1 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3901325 1 idle 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3901325 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3901325 -w 256 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3901329 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:09.97 reactor_1' 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3901329 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:09.97 reactor_1 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:32.724 10:05:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:32.983 10:05:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:31:32.983 10:05:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:31:32.983 10:05:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:32.983 10:05:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:32.983 10:05:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:31:35.512 10:05:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:35.512 10:05:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:35.512 10:05:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:35.512 10:05:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:35.512 10:05:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:35.512 10:05:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:31:35.512 10:05:11 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:31:35.512 10:05:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3901325 0 00:31:35.512 10:05:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3901325 0 idle 00:31:35.512 10:05:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3901325 00:31:35.512 10:05:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:35.512 10:05:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:35.512 10:05:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:35.513 10:05:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:35.513 10:05:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:35.513 10:05:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:35.513 10:05:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:35.513 10:05:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:35.513 10:05:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:35.513 10:05:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3901325 -w 256 00:31:35.513 10:05:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3901325 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:20.32 reactor_0' 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3901325 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:20.32 reactor_0 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3901325 1 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3901325 1 idle 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3901325 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3901325 -w 256 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3901329 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:10.02 reactor_1' 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3901329 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:10.02 reactor_1 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:35.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:35.513 rmmod nvme_tcp 00:31:35.513 rmmod nvme_fabrics 00:31:35.513 rmmod nvme_keyring 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3901325 ']' 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3901325 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3901325 ']' 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3901325 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3901325 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3901325' 00:31:35.513 killing process with pid 3901325 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3901325 00:31:35.513 10:05:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3901325 00:31:35.771 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:35.771 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:35.771 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:35.771 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:31:35.771 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:31:35.771 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:35.771 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:31:35.771 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:35.771 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:35.771 10:05:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.771 10:05:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:35.771 10:05:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.301 10:05:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:38.301 00:31:38.301 real 0m18.737s 00:31:38.301 user 0m36.882s 00:31:38.301 sys 0m6.527s 00:31:38.301 10:05:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:38.301 10:05:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:38.301 ************************************ 00:31:38.301 END TEST nvmf_interrupt 00:31:38.301 ************************************ 00:31:38.301 00:31:38.301 real 25m4.718s 00:31:38.301 user 58m47.553s 00:31:38.301 sys 6m40.370s 00:31:38.301 10:05:14 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:38.301 10:05:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:38.301 ************************************ 00:31:38.301 END TEST nvmf_tcp 00:31:38.301 ************************************ 00:31:38.301 10:05:14 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:31:38.301 10:05:14 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:38.301 10:05:14 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:38.301 10:05:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:38.301 10:05:14 -- common/autotest_common.sh@10 -- # set +x 00:31:38.301 ************************************ 00:31:38.301 START TEST spdkcli_nvmf_tcp 00:31:38.301 ************************************ 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:38.301 * Looking for test storage... 00:31:38.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:38.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.301 --rc genhtml_branch_coverage=1 00:31:38.301 --rc genhtml_function_coverage=1 00:31:38.301 --rc genhtml_legend=1 00:31:38.301 --rc geninfo_all_blocks=1 00:31:38.301 --rc geninfo_unexecuted_blocks=1 00:31:38.301 00:31:38.301 ' 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:38.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.301 --rc genhtml_branch_coverage=1 00:31:38.301 --rc genhtml_function_coverage=1 00:31:38.301 --rc genhtml_legend=1 00:31:38.301 --rc geninfo_all_blocks=1 00:31:38.301 --rc geninfo_unexecuted_blocks=1 00:31:38.301 00:31:38.301 ' 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:38.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.301 --rc genhtml_branch_coverage=1 00:31:38.301 --rc genhtml_function_coverage=1 00:31:38.301 --rc genhtml_legend=1 00:31:38.301 --rc geninfo_all_blocks=1 00:31:38.301 --rc geninfo_unexecuted_blocks=1 00:31:38.301 00:31:38.301 ' 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:38.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.301 --rc genhtml_branch_coverage=1 00:31:38.301 --rc genhtml_function_coverage=1 00:31:38.301 --rc genhtml_legend=1 00:31:38.301 --rc geninfo_all_blocks=1 00:31:38.301 --rc geninfo_unexecuted_blocks=1 00:31:38.301 00:31:38.301 ' 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:31:38.301 
10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:38.301 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:31:38.302 10:05:14 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:38.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3903369 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3903369 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3903369 ']' 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:38.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:38.302 10:05:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:38.302 [2024-11-20 10:05:14.990161] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
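The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above corresponds to the waitforlisten step that gates the spdkcli commands. A minimal bash sketch of such a wait loop (illustrative only: wait_for_rpc_sketch is a made-up name, and it assumes SPDK's scripts/rpc.py and its rpc_get_methods call are usable against the target's RPC socket):

  # Poll the SPDK RPC socket until the target answers, giving up after ~100 seconds.
  wait_for_rpc_sketch() {
      local sock=${1:-/var/tmp/spdk.sock}
      local rootdir=${2:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
      local i
      for (( i = 0; i < 100; i++ )); do
          # rpc_get_methods only succeeds once the app has finished init and is listening on $sock.
          if "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods &> /dev/null; then
              return 0
          fi
          sleep 1
      done
      return 1
  }

Only after this returns does the test drive spdkcli_job.py against the freshly started nvmf_tgt (pid 3903369 in this run).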
00:31:38.302 [2024-11-20 10:05:14.990246] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903369 ] 00:31:38.302 [2024-11-20 10:05:15.057356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:38.302 [2024-11-20 10:05:15.118956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:38.302 [2024-11-20 10:05:15.118961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.560 10:05:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:38.560 10:05:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:31:38.560 10:05:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:38.560 10:05:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:38.560 10:05:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:38.560 10:05:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:38.560 10:05:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:38.560 10:05:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:38.560 10:05:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:38.560 10:05:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:38.560 10:05:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:38.560 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:38.560 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:38.560 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:38.560 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:38.560 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:38.560 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:38.560 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:38.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:38.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:38.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:38.560 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:38.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:38.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:38.560 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:38.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:38.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:31:38.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:38.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:38.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:38.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:38.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:38.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:38.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:38.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:38.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:38.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:38.560 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:38.560 ' 00:31:41.168 [2024-11-20 10:05:17.888410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:42.549 [2024-11-20 10:05:19.208934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:45.076 [2024-11-20 10:05:21.560181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:46.975 [2024-11-20 10:05:23.610528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:48.348 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:48.348 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:48.348 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:48.348 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:48.348 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:48.348 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:48.348 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:48.348 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:48.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:48.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:48.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:48.348 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:48.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:48.348 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:48.348 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:48.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:48.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:48.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:48.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:48.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:48.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:48.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:48.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:48.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:48.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:48.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:48.349 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:48.349 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:48.607 10:05:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:48.607 10:05:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:48.607 10:05:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:48.607 10:05:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:48.607 10:05:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:48.607 10:05:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:48.607 10:05:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:48.607 10:05:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:48.865 10:05:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:48.865 10:05:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:48.865 10:05:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:48.865 10:05:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:48.865 10:05:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:49.125 
10:05:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:49.125 10:05:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:49.125 10:05:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:49.125 10:05:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:49.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:49.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:49.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:49.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:49.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:49.125 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:49.125 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:49.125 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:49.125 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:49.125 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:49.125 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:49.125 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:49.125 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:49.125 ' 00:31:54.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:54.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:54.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:54.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:54.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:54.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:54.386 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:54.386 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:54.386 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:54.386 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:54.386 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:54.386 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:54.386 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:54.386 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:54.386 10:05:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:54.386 10:05:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:54.386 10:05:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:54.386 
10:05:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3903369 00:31:54.386 10:05:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3903369 ']' 00:31:54.386 10:05:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3903369 00:31:54.386 10:05:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:31:54.386 10:05:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:54.386 10:05:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3903369 00:31:54.386 10:05:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:54.386 10:05:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:54.386 10:05:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3903369' 00:31:54.386 killing process with pid 3903369 00:31:54.386 10:05:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3903369 00:31:54.386 10:05:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3903369 00:31:54.645 10:05:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:54.645 10:05:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:54.645 10:05:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3903369 ']' 00:31:54.645 10:05:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3903369 00:31:54.645 10:05:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3903369 ']' 00:31:54.645 10:05:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3903369 00:31:54.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3903369) - No such process 00:31:54.645 10:05:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3903369 is not found' 00:31:54.645 Process with pid 3903369 is not found 00:31:54.645 10:05:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:54.645 10:05:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:54.645 10:05:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:54.645 00:31:54.645 real 0m16.682s 00:31:54.645 user 0m35.642s 00:31:54.645 sys 0m0.746s 00:31:54.645 10:05:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:54.645 10:05:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:54.645 ************************************ 00:31:54.645 END TEST spdkcli_nvmf_tcp 00:31:54.645 ************************************ 00:31:54.645 10:05:31 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:54.645 10:05:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:54.645 10:05:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:54.645 10:05:31 -- common/autotest_common.sh@10 -- # set +x 00:31:54.645 ************************************ 00:31:54.645 START TEST nvmf_identify_passthru 00:31:54.645 ************************************ 00:31:54.645 10:05:31 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:54.645 * Looking for test 
storage... 00:31:54.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:54.904 10:05:31 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:54.904 10:05:31 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:31:54.904 10:05:31 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:54.904 10:05:31 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:31:54.904 10:05:31 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:54.904 10:05:31 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:54.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.904 --rc genhtml_branch_coverage=1 00:31:54.904 --rc genhtml_function_coverage=1 00:31:54.904 --rc genhtml_legend=1 00:31:54.904 --rc geninfo_all_blocks=1 00:31:54.904 --rc geninfo_unexecuted_blocks=1 00:31:54.904 00:31:54.904 ' 00:31:54.904 10:05:31 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:54.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.904 --rc genhtml_branch_coverage=1 00:31:54.904 --rc genhtml_function_coverage=1 00:31:54.904 --rc genhtml_legend=1 00:31:54.904 --rc geninfo_all_blocks=1 00:31:54.904 --rc geninfo_unexecuted_blocks=1 00:31:54.904 00:31:54.904 ' 00:31:54.904 10:05:31 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:54.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.904 --rc genhtml_branch_coverage=1 00:31:54.904 --rc genhtml_function_coverage=1 00:31:54.904 --rc genhtml_legend=1 00:31:54.904 --rc geninfo_all_blocks=1 00:31:54.904 --rc geninfo_unexecuted_blocks=1 00:31:54.904 00:31:54.904 ' 00:31:54.904 10:05:31 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:54.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.904 --rc genhtml_branch_coverage=1 00:31:54.904 --rc genhtml_function_coverage=1 00:31:54.904 --rc genhtml_legend=1 00:31:54.904 --rc geninfo_all_blocks=1 00:31:54.904 --rc geninfo_unexecuted_blocks=1 00:31:54.904 00:31:54.904 ' 00:31:54.904 10:05:31 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:54.904 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:54.904 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:54.904 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:54.904 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:54.904 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:54.904 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:54.904 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:54.904 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:54.904 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:54.904 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:54.904 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:54.904 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:54.904 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:54.904 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:54.904 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:54.904 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:54.904 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:54.904 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.904 10:05:31 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.904 10:05:31 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.905 10:05:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.905 10:05:31 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.905 10:05:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:54.905 10:05:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.905 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:31:54.905 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:54.905 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:54.905 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:54.905 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:54.905 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:54.905 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:54.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:54.905 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:54.905 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:54.905 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:54.905 10:05:31 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.905 10:05:31 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:54.905 10:05:31 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.905 10:05:31 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.905 10:05:31 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.905 10:05:31 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.905 10:05:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.905 10:05:31 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.905 10:05:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:54.905 10:05:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.905 10:05:31 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:54.905 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:54.905 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:54.905 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:54.905 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:54.905 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:54.905 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.905 10:05:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:54.905 10:05:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.905 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:54.905 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:54.905 10:05:31 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:31:54.905 10:05:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:31:57.441 10:05:33 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:57.441 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:57.441 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:57.442 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:57.442 Found net devices under 0000:09:00.0: cvl_0_0 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:57.442 Found net devices under 0000:09:00.1: cvl_0_1 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:57.442 10:05:33 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:57.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:57.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:31:57.442 00:31:57.442 --- 10.0.0.2 ping statistics --- 00:31:57.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.442 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:57.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:57.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:31:57.442 00:31:57.442 --- 10.0.0.1 ping statistics --- 00:31:57.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.442 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:57.442 10:05:33 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:57.442 10:05:33 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:57.442 10:05:33 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:57.442 10:05:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:57.442 10:05:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:57.442 10:05:33 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:57.442 10:05:33 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:31:57.442 10:05:33 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:57.442 10:05:33 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:57.442 10:05:33 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:57.442 10:05:33 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:31:57.442 10:05:33 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:57.442 10:05:33 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:57.442 10:05:33 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:57.442 10:05:34 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:57.442 10:05:34 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:31:57.442 10:05:34 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:0b:00.0 00:31:57.442 10:05:34 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:31:57.442 10:05:34 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:31:57.442 10:05:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:31:57.442 10:05:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:57.442 10:05:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:01.633 10:05:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:32:01.633 10:05:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:32:01.633 10:05:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:01.633 10:05:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:05.819 10:05:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:05.819 10:05:42 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:05.819 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:05.819 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:05.820 10:05:42 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:05.820 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:05.820 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:05.820 10:05:42 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3908002 00:32:05.820 10:05:42 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:05.820 10:05:42 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:05.820 10:05:42 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3908002 00:32:05.820 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3908002 ']' 00:32:05.820 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.820 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:05.820 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.820 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:05.820 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:05.820 [2024-11-20 10:05:42.379656] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:32:05.820 [2024-11-20 10:05:42.379746] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:05.820 [2024-11-20 10:05:42.472404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:05.820 [2024-11-20 10:05:42.545616] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:05.820 [2024-11-20 10:05:42.545677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:05.820 [2024-11-20 10:05:42.545718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:05.820 [2024-11-20 10:05:42.545740] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:05.820 [2024-11-20 10:05:42.545774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:05.820 [2024-11-20 10:05:42.547745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.820 [2024-11-20 10:05:42.547811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:05.820 [2024-11-20 10:05:42.547875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:05.820 [2024-11-20 10:05:42.547884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.820 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:05.820 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:32:05.820 10:05:42 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:05.820 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.820 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:05.820 INFO: Log level set to 20 00:32:05.820 INFO: Requests: 00:32:05.820 { 00:32:05.820 "jsonrpc": "2.0", 00:32:05.820 "method": "nvmf_set_config", 00:32:05.820 "id": 1, 00:32:05.820 "params": { 00:32:05.820 "admin_cmd_passthru": { 00:32:05.820 "identify_ctrlr": true 00:32:05.820 } 00:32:05.820 } 00:32:05.820 } 00:32:05.820 00:32:05.820 INFO: response: 00:32:05.820 { 00:32:05.820 "jsonrpc": "2.0", 00:32:05.820 "id": 1, 00:32:05.820 "result": true 00:32:05.820 } 00:32:05.820 00:32:05.820 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.820 10:05:42 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:05.820 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.820 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:05.820 INFO: Setting log level to 20 00:32:05.820 INFO: Setting log level to 20 00:32:05.820 INFO: Log level set to 20 00:32:05.820 INFO: Log level set to 20 00:32:05.820 INFO: Requests: 00:32:05.820 { 00:32:05.820 "jsonrpc": "2.0", 00:32:05.820 "method": "framework_start_init", 00:32:05.820 "id": 1 00:32:05.820 } 00:32:05.820 00:32:05.820 INFO: Requests: 00:32:05.820 { 00:32:05.820 "jsonrpc": "2.0", 00:32:05.820 "method": "framework_start_init", 00:32:05.820 "id": 1 00:32:05.820 } 00:32:05.820 00:32:06.078 [2024-11-20 10:05:42.813542] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:06.078 INFO: response: 00:32:06.078 { 00:32:06.078 "jsonrpc": "2.0", 00:32:06.078 "id": 1, 00:32:06.078 "result": true 00:32:06.078 } 00:32:06.078 00:32:06.078 INFO: response: 00:32:06.078 { 00:32:06.078 "jsonrpc": "2.0", 00:32:06.078 "id": 1, 00:32:06.078 "result": true 00:32:06.078 } 00:32:06.078 00:32:06.078 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.078 10:05:42 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:06.078 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.078 10:05:42 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:32:06.078 INFO: Setting log level to 40 00:32:06.078 INFO: Setting log level to 40 00:32:06.078 INFO: Setting log level to 40 00:32:06.078 [2024-11-20 10:05:42.823503] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:06.078 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.078 10:05:42 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:06.078 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:06.078 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:06.078 10:05:42 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:32:06.078 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.078 10:05:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:09.359 Nvme0n1 00:32:09.359 10:05:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.359 10:05:45 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:09.359 10:05:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.359 10:05:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:09.359 10:05:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.359 10:05:45 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:09.359 10:05:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.359 10:05:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:09.359 10:05:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.359 10:05:45 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.359 10:05:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.359 10:05:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:09.359 [2024-11-20 10:05:45.723859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.359 10:05:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.359 10:05:45 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:09.359 10:05:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.359 10:05:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:09.359 [ 00:32:09.359 { 00:32:09.359 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:09.359 "subtype": "Discovery", 00:32:09.359 "listen_addresses": [], 00:32:09.359 "allow_any_host": true, 00:32:09.359 "hosts": [] 00:32:09.359 }, 00:32:09.359 { 00:32:09.359 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:09.359 "subtype": "NVMe", 00:32:09.359 "listen_addresses": [ 00:32:09.359 { 00:32:09.359 "trtype": "TCP", 00:32:09.359 "adrfam": "IPv4", 00:32:09.359 "traddr": "10.0.0.2", 00:32:09.359 "trsvcid": "4420" 00:32:09.359 } 00:32:09.359 ], 00:32:09.359 "allow_any_host": true, 00:32:09.359 "hosts": [], 00:32:09.359 "serial_number": 
"SPDK00000000000001", 00:32:09.359 "model_number": "SPDK bdev Controller", 00:32:09.359 "max_namespaces": 1, 00:32:09.359 "min_cntlid": 1, 00:32:09.359 "max_cntlid": 65519, 00:32:09.359 "namespaces": [ 00:32:09.359 { 00:32:09.359 "nsid": 1, 00:32:09.359 "bdev_name": "Nvme0n1", 00:32:09.359 "name": "Nvme0n1", 00:32:09.359 "nguid": "FA143400257B4F839F60BD5326E7140F", 00:32:09.359 "uuid": "fa143400-257b-4f83-9f60-bd5326e7140f" 00:32:09.359 } 00:32:09.359 ] 00:32:09.359 } 00:32:09.359 ] 00:32:09.359 10:05:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.359 10:05:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:09.359 10:05:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:09.359 10:05:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:09.359 10:05:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:32:09.359 10:05:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:09.359 10:05:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:09.359 10:05:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:09.359 10:05:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:09.360 10:05:45 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:32:09.360 10:05:45 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:09.360 10:05:45 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:09.360 10:05:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.360 10:05:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:09.360 10:05:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.360 10:05:45 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:09.360 10:05:45 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:09.360 10:05:45 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:09.360 10:05:45 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:09.360 10:05:46 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:09.360 10:05:46 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:09.360 10:05:46 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:09.360 10:05:46 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:09.360 rmmod nvme_tcp 00:32:09.360 rmmod nvme_fabrics 00:32:09.360 rmmod nvme_keyring 00:32:09.360 10:05:46 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:09.360 10:05:46 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:09.360 10:05:46 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:09.360 10:05:46 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 3908002 ']' 00:32:09.360 10:05:46 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3908002 00:32:09.360 10:05:46 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3908002 ']' 00:32:09.360 10:05:46 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3908002 00:32:09.360 10:05:46 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:32:09.360 10:05:46 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:09.360 10:05:46 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3908002 00:32:09.360 10:05:46 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:09.360 10:05:46 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:09.360 10:05:46 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3908002' 00:32:09.360 killing process with pid 3908002 00:32:09.360 10:05:46 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3908002 00:32:09.360 10:05:46 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3908002 00:32:10.732 10:05:47 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:10.732 10:05:47 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:10.732 10:05:47 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:10.732 10:05:47 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:32:10.732 10:05:47 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:32:10.732 10:05:47 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:10.732 10:05:47 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:32:10.732 10:05:47 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:10.732 10:05:47 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:10.732 10:05:47 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.732 10:05:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:10.732 10:05:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:13.268 10:05:49 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:13.268 00:32:13.268 real 0m18.132s 00:32:13.268 user 0m26.053s 00:32:13.268 sys 0m3.188s 00:32:13.268 10:05:49 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:13.268 10:05:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:13.268 ************************************ 00:32:13.268 END TEST nvmf_identify_passthru 00:32:13.268 ************************************ 00:32:13.268 10:05:49 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:13.268 10:05:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:13.268 10:05:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:13.268 10:05:49 -- common/autotest_common.sh@10 -- # set +x 00:32:13.268 ************************************ 00:32:13.268 START TEST nvmf_dif 00:32:13.268 ************************************ 00:32:13.268 10:05:49 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:13.268 * Looking for test 
storage... 00:32:13.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:13.268 10:05:49 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:13.268 10:05:49 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:32:13.268 10:05:49 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:13.268 10:05:49 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:13.268 10:05:49 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:32:13.268 10:05:49 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:13.268 10:05:49 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:13.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.268 --rc genhtml_branch_coverage=1 00:32:13.268 --rc genhtml_function_coverage=1 00:32:13.268 --rc genhtml_legend=1 00:32:13.268 --rc geninfo_all_blocks=1 00:32:13.268 --rc geninfo_unexecuted_blocks=1 00:32:13.268 00:32:13.268 ' 00:32:13.268 10:05:49 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:13.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.268 --rc genhtml_branch_coverage=1 00:32:13.268 --rc genhtml_function_coverage=1 00:32:13.268 --rc genhtml_legend=1 00:32:13.268 --rc geninfo_all_blocks=1 00:32:13.268 --rc geninfo_unexecuted_blocks=1 00:32:13.268 00:32:13.268 ' 00:32:13.268 10:05:49 nvmf_dif -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:13.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.268 --rc genhtml_branch_coverage=1 00:32:13.269 --rc genhtml_function_coverage=1 00:32:13.269 --rc genhtml_legend=1 00:32:13.269 --rc geninfo_all_blocks=1 00:32:13.269 --rc geninfo_unexecuted_blocks=1 00:32:13.269 00:32:13.269 ' 00:32:13.269 10:05:49 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:13.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.269 --rc genhtml_branch_coverage=1 00:32:13.269 --rc genhtml_function_coverage=1 00:32:13.269 --rc genhtml_legend=1 00:32:13.269 --rc geninfo_all_blocks=1 00:32:13.269 --rc geninfo_unexecuted_blocks=1 00:32:13.269 00:32:13.269 ' 00:32:13.269 10:05:49 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:13.269 10:05:49 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:32:13.269 10:05:49 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:13.269 10:05:49 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:13.269 10:05:49 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:13.269 10:05:49 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.269 10:05:49 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.269 10:05:49 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.269 10:05:49 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:32:13.269 10:05:49 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:13.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:13.269 10:05:49 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:32:13.269 10:05:49 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:32:13.269 10:05:49 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:13.269 10:05:49 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:32:13.269 10:05:49 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.269 10:05:49 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:13.269 10:05:49 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:13.269 10:05:49 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:32:13.269 10:05:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:15.171 10:05:52 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:15.172 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:15.172 
10:05:52 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:15.172 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:15.172 Found net devices under 0000:09:00.0: cvl_0_0 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:15.172 Found net devices under 0000:09:00.1: cvl_0_1 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:15.172 10:05:52 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:15.430 10:05:52 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:15.430 10:05:52 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:15.430 10:05:52 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:15.430 10:05:52 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:15.430 10:05:52 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:15.430 10:05:52 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:15.430 10:05:52 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:15.430 10:05:52 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:15.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:15.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:32:15.430 00:32:15.430 --- 10.0.0.2 ping statistics --- 00:32:15.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.430 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:32:15.430 10:05:52 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:15.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:15.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:32:15.430 00:32:15.430 --- 10.0.0.1 ping statistics --- 00:32:15.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.430 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:32:15.430 10:05:52 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:15.430 10:05:52 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:32:15.430 10:05:52 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:15.430 10:05:52 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:16.366 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:16.366 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:16.366 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:16.366 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:16.366 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:16.366 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:16.366 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:16.366 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:16.366 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:16.366 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:16.366 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:16.366 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:16.624 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:16.624 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:16.624 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:16.624 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:16.624 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:16.624 10:05:53 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:16.624 10:05:53 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:16.624 10:05:53 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:16.624 10:05:53 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:16.624 10:05:53 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:16.624 10:05:53 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:16.624 10:05:53 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:16.624 10:05:53 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:32:16.624 10:05:53 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:16.624 10:05:53 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:16.624 10:05:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:16.624 10:05:53 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3911276 00:32:16.624 10:05:53 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:16.624 10:05:53 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3911276 00:32:16.624 10:05:53 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3911276 ']' 00:32:16.624 10:05:53 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:16.624 10:05:53 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:16.624 10:05:53 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:32:16.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:16.624 10:05:53 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:16.624 10:05:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:16.624 [2024-11-20 10:05:53.526211] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:32:16.624 [2024-11-20 10:05:53.526277] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:16.882 [2024-11-20 10:05:53.599326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.882 [2024-11-20 10:05:53.656531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:16.882 [2024-11-20 10:05:53.656599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:16.883 [2024-11-20 10:05:53.656613] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:16.883 [2024-11-20 10:05:53.656624] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:16.883 [2024-11-20 10:05:53.656634] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:16.883 [2024-11-20 10:05:53.657226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.141 10:05:53 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:17.141 10:05:53 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:32:17.141 10:05:53 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:17.141 10:05:53 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:17.141 10:05:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:17.141 10:05:53 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:17.141 10:05:53 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:32:17.141 10:05:53 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:17.141 10:05:53 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.141 10:05:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:17.141 [2024-11-20 10:05:53.831890] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:17.141 10:05:53 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.141 10:05:53 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:17.141 10:05:53 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:17.141 10:05:53 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:17.141 10:05:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:17.141 ************************************ 00:32:17.141 START TEST fio_dif_1_default 00:32:17.141 ************************************ 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:17.141 bdev_null0 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:17.141 [2024-11-20 10:05:53.888158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:17.141 { 00:32:17.141 "params": { 00:32:17.141 "name": "Nvme$subsystem", 00:32:17.141 "trtype": "$TEST_TRANSPORT", 00:32:17.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:17.141 "adrfam": "ipv4", 00:32:17.141 "trsvcid": "$NVMF_PORT", 00:32:17.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:17.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:17.141 "hdgst": ${hdgst:-false}, 00:32:17.141 "ddgst": ${ddgst:-false} 00:32:17.141 }, 00:32:17.141 "method": "bdev_nvme_attach_controller" 00:32:17.141 } 00:32:17.141 EOF 00:32:17.141 )") 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:32:17.141 10:05:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
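fio itself never opens a kernel block device in these tests: the SPDK bdev fio plugin is preloaded and the bdev layer is configured from a JSON document handed over on a file descriptor. Stripped of the process substitutions (/dev/fd/62 and /dev/fd/61 above), the invocation is roughly the sketch below; bdev.json and job.fio are illustrative stand-in names for those descriptors, where bdev.json carries the bdev_nvme_attach_controller parameters printed just below and job.fio is the jobfile produced by gen_fio_conf:

  # bdev.json / job.fio stand in for the /dev/fd/62 and /dev/fd/61 fds used by the script
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio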
00:32:17.142 10:05:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:32:17.142 10:05:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:17.142 "params": { 00:32:17.142 "name": "Nvme0", 00:32:17.142 "trtype": "tcp", 00:32:17.142 "traddr": "10.0.0.2", 00:32:17.142 "adrfam": "ipv4", 00:32:17.142 "trsvcid": "4420", 00:32:17.142 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:17.142 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:17.142 "hdgst": false, 00:32:17.142 "ddgst": false 00:32:17.142 }, 00:32:17.142 "method": "bdev_nvme_attach_controller" 00:32:17.142 }' 00:32:17.142 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:17.142 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:17.142 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:17.142 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:17.142 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:17.142 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:17.142 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:17.142 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:17.142 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:17.142 10:05:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:17.400 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:17.400 fio-3.35 00:32:17.400 Starting 1 thread 00:32:29.614 00:32:29.614 filename0: (groupid=0, jobs=1): err= 0: pid=3911502: Wed Nov 20 10:06:04 2024 00:32:29.614 read: IOPS=99, BW=396KiB/s (406kB/s)(3968KiB/10016msec) 00:32:29.614 slat (usec): min=6, max=101, avg= 9.49, stdev= 5.12 00:32:29.614 clat (usec): min=551, max=46250, avg=40354.64, stdev=5100.86 00:32:29.614 lat (usec): min=558, max=46283, avg=40364.13, stdev=5100.57 00:32:29.614 clat percentiles (usec): 00:32:29.614 | 1.00th=[ 652], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:29.614 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:29.614 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:29.615 | 99.00th=[41681], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:32:29.615 | 99.99th=[46400] 00:32:29.615 bw ( KiB/s): min= 384, max= 416, per=99.71%, avg=395.20, stdev=15.66, samples=20 00:32:29.615 iops : min= 96, max= 104, avg=98.80, stdev= 3.91, samples=20 00:32:29.615 lat (usec) : 750=1.61% 00:32:29.615 lat (msec) : 50=98.39% 00:32:29.615 cpu : usr=91.20%, sys=8.52%, ctx=13, majf=0, minf=230 00:32:29.615 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:29.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:29.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:29.615 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:29.615 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:29.615 00:32:29.615 Run status 
group 0 (all jobs): 00:32:29.615 READ: bw=396KiB/s (406kB/s), 396KiB/s-396KiB/s (406kB/s-406kB/s), io=3968KiB (4063kB), run=10016-10016msec 00:32:29.615 10:06:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:29.615 10:06:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:32:29.615 10:06:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:32:29.615 10:06:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:29.615 10:06:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:32:29.615 10:06:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:29.615 10:06:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.615 10:06:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:29.615 10:06:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.615 10:06:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:29.615 10:06:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.615 10:06:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:29.615 10:06:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.615 00:32:29.615 real 0m11.140s 00:32:29.615 user 0m10.194s 00:32:29.615 sys 0m1.124s 00:32:29.615 10:06:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:29.615 10:06:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:29.615 ************************************ 00:32:29.615 END TEST fio_dif_1_default 00:32:29.615 ************************************ 00:32:29.615 10:06:05 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:29.615 10:06:05 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:29.615 10:06:05 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:29.615 10:06:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:29.615 ************************************ 00:32:29.615 START TEST fio_dif_1_multi_subsystems 00:32:29.615 ************************************ 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:29.615 bdev_null0 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:29.615 [2024-11-20 10:06:05.084570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:29.615 bdev_null1 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:29.615 { 00:32:29.615 "params": { 00:32:29.615 "name": "Nvme$subsystem", 00:32:29.615 "trtype": "$TEST_TRANSPORT", 00:32:29.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:29.615 "adrfam": "ipv4", 00:32:29.615 "trsvcid": "$NVMF_PORT", 00:32:29.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:29.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:29.615 "hdgst": ${hdgst:-false}, 00:32:29.615 "ddgst": ${ddgst:-false} 00:32:29.615 }, 00:32:29.615 "method": "bdev_nvme_attach_controller" 00:32:29.615 } 00:32:29.615 EOF 00:32:29.615 )") 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:29.615 10:06:05 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:29.615 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:32:29.616 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:29.616 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:29.616 { 00:32:29.616 "params": { 00:32:29.616 "name": "Nvme$subsystem", 00:32:29.616 "trtype": "$TEST_TRANSPORT", 00:32:29.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:29.616 "adrfam": "ipv4", 00:32:29.616 "trsvcid": "$NVMF_PORT", 00:32:29.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:29.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:29.616 "hdgst": ${hdgst:-false}, 00:32:29.616 "ddgst": ${ddgst:-false} 00:32:29.616 }, 00:32:29.616 "method": "bdev_nvme_attach_controller" 00:32:29.616 } 00:32:29.616 EOF 00:32:29.616 )") 00:32:29.616 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:32:29.616 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:32:29.616 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:29.616 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:32:29.616 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:32:29.616 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:29.616 "params": { 00:32:29.616 "name": "Nvme0", 00:32:29.616 "trtype": "tcp", 00:32:29.616 "traddr": "10.0.0.2", 00:32:29.616 "adrfam": "ipv4", 00:32:29.616 "trsvcid": "4420", 00:32:29.616 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:29.616 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:29.616 "hdgst": false, 00:32:29.616 "ddgst": false 00:32:29.616 }, 00:32:29.616 "method": "bdev_nvme_attach_controller" 00:32:29.616 },{ 00:32:29.616 "params": { 00:32:29.616 "name": "Nvme1", 00:32:29.616 "trtype": "tcp", 00:32:29.616 "traddr": "10.0.0.2", 00:32:29.616 "adrfam": "ipv4", 00:32:29.616 "trsvcid": "4420", 00:32:29.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:29.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:29.616 "hdgst": false, 00:32:29.616 "ddgst": false 00:32:29.616 }, 00:32:29.616 "method": "bdev_nvme_attach_controller" 00:32:29.616 }' 00:32:29.616 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:29.616 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:29.616 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:29.616 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:29.616 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:29.616 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:29.616 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # 
asan_lib= 00:32:29.616 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:29.616 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:29.616 10:06:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:29.616 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:29.616 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:29.616 fio-3.35 00:32:29.616 Starting 2 threads 00:32:39.643 00:32:39.643 filename0: (groupid=0, jobs=1): err= 0: pid=3913024: Wed Nov 20 10:06:16 2024 00:32:39.643 read: IOPS=98, BW=396KiB/s (405kB/s)(3968KiB/10028msec) 00:32:39.643 slat (nsec): min=6722, max=31734, avg=9979.22, stdev=2965.32 00:32:39.643 clat (usec): min=901, max=42012, avg=40404.11, stdev=5064.94 00:32:39.643 lat (usec): min=909, max=42030, avg=40414.08, stdev=5064.76 00:32:39.643 clat percentiles (usec): 00:32:39.643 | 1.00th=[ 922], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:39.643 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:39.643 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:32:39.643 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:39.643 | 99.99th=[42206] 00:32:39.643 bw ( KiB/s): min= 384, max= 448, per=50.73%, avg=395.20, stdev=18.79, samples=20 00:32:39.643 iops : min= 96, max= 112, avg=98.80, stdev= 4.70, samples=20 00:32:39.643 lat (usec) : 1000=1.61% 00:32:39.643 lat (msec) : 50=98.39% 00:32:39.643 cpu : usr=94.71%, sys=4.83%, ctx=25, majf=0, minf=143 00:32:39.643 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:39.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.643 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:39.643 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:39.643 filename1: (groupid=0, jobs=1): err= 0: pid=3913025: Wed Nov 20 10:06:16 2024 00:32:39.643 read: IOPS=95, BW=384KiB/s (393kB/s)(3840KiB/10003msec) 00:32:39.643 slat (nsec): min=7412, max=81918, avg=9756.44, stdev=3566.28 00:32:39.643 clat (usec): min=553, max=42085, avg=41648.21, stdev=2682.41 00:32:39.643 lat (usec): min=561, max=42099, avg=41657.97, stdev=2682.42 00:32:39.643 clat percentiles (usec): 00:32:39.643 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[42206], 00:32:39.643 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:32:39.643 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:39.643 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:39.643 | 99.99th=[42206] 00:32:39.643 bw ( KiB/s): min= 352, max= 416, per=49.06%, avg=382.40, stdev=12.61, samples=20 00:32:39.643 iops : min= 88, max= 104, avg=95.60, stdev= 3.15, samples=20 00:32:39.643 lat (usec) : 750=0.42% 00:32:39.643 lat (msec) : 50=99.58% 00:32:39.643 cpu : usr=94.78%, sys=4.91%, ctx=13, majf=0, minf=156 00:32:39.643 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:39.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:39.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.643 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:39.643 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:39.643 00:32:39.643 Run status group 0 (all jobs): 00:32:39.644 READ: bw=779KiB/s (797kB/s), 384KiB/s-396KiB/s (393kB/s-405kB/s), io=7808KiB (7995kB), run=10003-10028msec 00:32:39.644 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:39.644 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:39.644 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:39.644 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:39.644 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:39.644 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:39.644 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.644 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:39.644 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.644 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:39.644 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.644 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:39.644 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.644 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:39.644 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:39.644 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:39.644 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:39.644 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.644 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:39.902 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.902 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:39.902 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.902 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:39.902 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.902 00:32:39.902 real 0m11.515s 00:32:39.902 user 0m20.592s 00:32:39.902 sys 0m1.301s 00:32:39.902 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:39.902 10:06:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:39.902 ************************************ 00:32:39.902 END TEST fio_dif_1_multi_subsystems 00:32:39.902 ************************************ 00:32:39.902 10:06:16 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:32:39.902 10:06:16 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:39.902 10:06:16 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:39.902 10:06:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:39.903 ************************************ 00:32:39.903 START TEST fio_dif_rand_params 00:32:39.903 ************************************ 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:39.903 bdev_null0 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:39.903 [2024-11-20 10:06:16.651327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.903 
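Each DIF test case repeats the same per-subsystem plumbing with a different --dif-type; for this fio_dif_rand_params case the rpc_cmd calls traced above are equivalent to roughly the following, assuming scripts/rpc.py can reach the target's /var/tmp/spdk.sock:

  # 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and protection information type 3,
  # exported through an NVMe/TCP listener on the namespaced target address
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420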
10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:39.903 { 00:32:39.903 "params": { 00:32:39.903 "name": "Nvme$subsystem", 00:32:39.903 "trtype": "$TEST_TRANSPORT", 00:32:39.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:39.903 "adrfam": "ipv4", 00:32:39.903 "trsvcid": "$NVMF_PORT", 00:32:39.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:39.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:39.903 "hdgst": ${hdgst:-false}, 00:32:39.903 "ddgst": ${ddgst:-false} 00:32:39.903 }, 00:32:39.903 "method": "bdev_nvme_attach_controller" 00:32:39.903 } 00:32:39.903 EOF 00:32:39.903 )") 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
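For this case gen_fio_conf turns the parameters picked at the top of the test (bs=128k, numjobs=3, iodepth=3, runtime=5) into a small fio jobfile that is fed in on the second descriptor. A reconstructed sketch, inferred from the fio banner printed below rather than copied from the literal generated file, might be built like this:

  # reconstructed sketch of the generated jobfile; thread mode matches the
  # "Starting 3 threads" banner, filename=Nvme0n1 is the assumed bdev name
  # exposed by the Nvme0 controller attached in the JSON config
  cat > job.fio <<'EOF'
  [global]
  thread=1
  ioengine=spdk_bdev
  bs=128k
  iodepth=3
  numjobs=3
  runtime=5

  [filename0]
  rw=randread
  filename=Nvme0n1
  EOF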
00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:39.903 "params": { 00:32:39.903 "name": "Nvme0", 00:32:39.903 "trtype": "tcp", 00:32:39.903 "traddr": "10.0.0.2", 00:32:39.903 "adrfam": "ipv4", 00:32:39.903 "trsvcid": "4420", 00:32:39.903 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:39.903 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:39.903 "hdgst": false, 00:32:39.903 "ddgst": false 00:32:39.903 }, 00:32:39.903 "method": "bdev_nvme_attach_controller" 00:32:39.903 }' 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:39.903 10:06:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:40.162 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:40.162 ... 
00:32:40.162 fio-3.35 00:32:40.162 Starting 3 threads 00:32:46.716 00:32:46.716 filename0: (groupid=0, jobs=1): err= 0: pid=3914931: Wed Nov 20 10:06:22 2024 00:32:46.716 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(143MiB/5046msec) 00:32:46.716 slat (nsec): min=4754, max=58581, avg=15207.63, stdev=4044.87 00:32:46.716 clat (usec): min=5077, max=92181, avg=13141.65, stdev=7322.83 00:32:46.716 lat (usec): min=5113, max=92195, avg=13156.86, stdev=7322.81 00:32:46.716 clat percentiles (usec): 00:32:46.716 | 1.00th=[ 5669], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 9765], 00:32:46.716 | 30.00th=[11207], 40.00th=[11994], 50.00th=[12387], 60.00th=[12780], 00:32:46.716 | 70.00th=[13435], 80.00th=[14091], 90.00th=[15401], 95.00th=[16188], 00:32:46.716 | 99.00th=[51119], 99.50th=[53740], 99.90th=[87557], 99.95th=[91751], 00:32:46.716 | 99.99th=[91751] 00:32:46.716 bw ( KiB/s): min=24320, max=32320, per=33.79%, avg=29292.80, stdev=2856.13, samples=10 00:32:46.716 iops : min= 190, max= 252, avg=228.80, stdev=22.26, samples=10 00:32:46.716 lat (msec) : 10=21.53%, 20=75.59%, 50=1.48%, 100=1.39% 00:32:46.716 cpu : usr=85.51%, sys=9.34%, ctx=440, majf=0, minf=75 00:32:46.716 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:46.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.716 issued rwts: total=1147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.716 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:46.716 filename0: (groupid=0, jobs=1): err= 0: pid=3914932: Wed Nov 20 10:06:22 2024 00:32:46.716 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(147MiB/5046msec) 00:32:46.716 slat (nsec): min=4599, max=29187, avg=14365.84, stdev=2411.02 00:32:46.716 clat (usec): min=4518, max=90287, avg=12806.70, stdev=7364.44 00:32:46.716 lat (usec): min=4531, max=90300, avg=12821.07, stdev=7364.38 00:32:46.716 clat percentiles (usec): 00:32:46.716 | 1.00th=[ 5080], 5.00th=[ 7373], 10.00th=[ 8225], 20.00th=[ 9241], 00:32:46.716 | 30.00th=[11076], 40.00th=[11600], 50.00th=[12125], 60.00th=[12518], 00:32:46.716 | 70.00th=[13042], 80.00th=[13829], 90.00th=[14877], 95.00th=[15795], 00:32:46.716 | 99.00th=[52167], 99.50th=[53216], 99.90th=[89654], 99.95th=[90702], 00:32:46.716 | 99.99th=[90702] 00:32:46.716 bw ( KiB/s): min=25088, max=34560, per=34.68%, avg=30059.80, stdev=3186.09, samples=10 00:32:46.716 iops : min= 196, max= 270, avg=234.80, stdev=24.93, samples=10 00:32:46.716 lat (msec) : 10=23.62%, 20=73.32%, 50=1.78%, 100=1.27% 00:32:46.716 cpu : usr=93.10%, sys=6.03%, ctx=171, majf=0, minf=90 00:32:46.716 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:46.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.716 issued rwts: total=1177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.716 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:46.716 filename0: (groupid=0, jobs=1): err= 0: pid=3914933: Wed Nov 20 10:06:22 2024 00:32:46.716 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(137MiB/5006msec) 00:32:46.716 slat (nsec): min=4504, max=86686, avg=14666.88, stdev=4907.64 00:32:46.716 clat (usec): min=4595, max=88897, avg=13715.48, stdev=9761.06 00:32:46.716 lat (usec): min=4604, max=88911, avg=13730.15, stdev=9760.89 00:32:46.716 clat percentiles (usec): 00:32:46.716 | 1.00th=[ 4752], 5.00th=[ 8160], 10.00th=[ 8979], 
20.00th=[10552], 00:32:46.716 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11731], 60.00th=[11994], 00:32:46.716 | 70.00th=[12256], 80.00th=[12780], 90.00th=[13435], 95.00th=[49546], 00:32:46.716 | 99.00th=[53216], 99.50th=[53740], 99.90th=[54264], 99.95th=[88605], 00:32:46.716 | 99.99th=[88605] 00:32:46.716 bw ( KiB/s): min=19200, max=35584, per=32.22%, avg=27929.60, stdev=4493.51, samples=10 00:32:46.716 iops : min= 150, max= 278, avg=218.20, stdev=35.11, samples=10 00:32:46.716 lat (msec) : 10=14.18%, 20=79.87%, 50=1.46%, 100=4.48% 00:32:46.716 cpu : usr=90.75%, sys=7.39%, ctx=108, majf=0, minf=133 00:32:46.716 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:46.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.716 issued rwts: total=1093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.716 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:46.716 00:32:46.716 Run status group 0 (all jobs): 00:32:46.716 READ: bw=84.6MiB/s (88.8MB/s), 27.3MiB/s-29.2MiB/s (28.6MB/s-30.6MB/s), io=427MiB (448MB), run=5006-5046msec 00:32:46.716 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:46.716 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:46.716 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:46.716 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 
-- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.717 bdev_null0 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.717 10:06:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.717 [2024-11-20 10:06:23.007970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.717 bdev_null1 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.717 bdev_null2 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:46.717 { 00:32:46.717 "params": { 00:32:46.717 "name": 
"Nvme$subsystem", 00:32:46.717 "trtype": "$TEST_TRANSPORT", 00:32:46.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:46.717 "adrfam": "ipv4", 00:32:46.717 "trsvcid": "$NVMF_PORT", 00:32:46.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:46.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:46.717 "hdgst": ${hdgst:-false}, 00:32:46.717 "ddgst": ${ddgst:-false} 00:32:46.717 }, 00:32:46.717 "method": "bdev_nvme_attach_controller" 00:32:46.717 } 00:32:46.717 EOF 00:32:46.717 )") 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:46.717 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:46.718 { 00:32:46.718 "params": { 00:32:46.718 "name": "Nvme$subsystem", 00:32:46.718 "trtype": "$TEST_TRANSPORT", 00:32:46.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:46.718 "adrfam": "ipv4", 00:32:46.718 "trsvcid": "$NVMF_PORT", 00:32:46.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:46.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:46.718 "hdgst": ${hdgst:-false}, 00:32:46.718 "ddgst": ${ddgst:-false} 00:32:46.718 }, 00:32:46.718 "method": "bdev_nvme_attach_controller" 00:32:46.718 } 00:32:46.718 EOF 00:32:46.718 )") 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:46.718 10:06:23 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:46.718 { 00:32:46.718 "params": { 00:32:46.718 "name": "Nvme$subsystem", 00:32:46.718 "trtype": "$TEST_TRANSPORT", 00:32:46.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:46.718 "adrfam": "ipv4", 00:32:46.718 "trsvcid": "$NVMF_PORT", 00:32:46.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:46.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:46.718 "hdgst": ${hdgst:-false}, 00:32:46.718 "ddgst": ${ddgst:-false} 00:32:46.718 }, 00:32:46.718 "method": "bdev_nvme_attach_controller" 00:32:46.718 } 00:32:46.718 EOF 00:32:46.718 )") 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:46.718 "params": { 00:32:46.718 "name": "Nvme0", 00:32:46.718 "trtype": "tcp", 00:32:46.718 "traddr": "10.0.0.2", 00:32:46.718 "adrfam": "ipv4", 00:32:46.718 "trsvcid": "4420", 00:32:46.718 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:46.718 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:46.718 "hdgst": false, 00:32:46.718 "ddgst": false 00:32:46.718 }, 00:32:46.718 "method": "bdev_nvme_attach_controller" 00:32:46.718 },{ 00:32:46.718 "params": { 00:32:46.718 "name": "Nvme1", 00:32:46.718 "trtype": "tcp", 00:32:46.718 "traddr": "10.0.0.2", 00:32:46.718 "adrfam": "ipv4", 00:32:46.718 "trsvcid": "4420", 00:32:46.718 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:46.718 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:46.718 "hdgst": false, 00:32:46.718 "ddgst": false 00:32:46.718 }, 00:32:46.718 "method": "bdev_nvme_attach_controller" 00:32:46.718 },{ 00:32:46.718 "params": { 00:32:46.718 "name": "Nvme2", 00:32:46.718 "trtype": "tcp", 00:32:46.718 "traddr": "10.0.0.2", 00:32:46.718 "adrfam": "ipv4", 00:32:46.718 "trsvcid": "4420", 00:32:46.718 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:46.718 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:46.718 "hdgst": false, 00:32:46.718 "ddgst": false 00:32:46.718 }, 00:32:46.718 "method": "bdev_nvme_attach_controller" 00:32:46.718 }' 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:46.718 
10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:46.718 10:06:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:46.718 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:46.718 ... 00:32:46.718 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:46.718 ... 00:32:46.718 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:46.718 ... 00:32:46.718 fio-3.35 00:32:46.718 Starting 24 threads 00:32:58.920 00:32:58.920 filename0: (groupid=0, jobs=1): err= 0: pid=3915800: Wed Nov 20 10:06:34 2024 00:32:58.920 read: IOPS=456, BW=1825KiB/s (1869kB/s)(17.9MiB/10029msec) 00:32:58.920 slat (nsec): min=8088, max=86752, avg=12688.99, stdev=6569.81 00:32:58.920 clat (msec): min=26, max=204, avg=34.94, stdev=13.89 00:32:58.920 lat (msec): min=26, max=204, avg=34.96, stdev=13.89 00:32:58.920 clat percentiles (msec): 00:32:58.920 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.920 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:58.920 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:58.920 | 99.00th=[ 142], 99.50th=[ 148], 99.90th=[ 178], 99.95th=[ 178], 00:32:58.920 | 99.99th=[ 205] 00:32:58.920 bw ( KiB/s): min= 496, max= 2048, per=4.16%, avg=1824.00, stdev=334.21, samples=20 00:32:58.920 iops : min= 124, max= 512, avg=456.00, stdev=83.55, samples=20 00:32:58.920 lat (msec) : 50=98.60%, 250=1.40% 00:32:58.920 cpu : usr=98.11%, sys=1.48%, ctx=21, majf=0, minf=82 00:32:58.920 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:58.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.920 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.920 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.920 filename0: (groupid=0, jobs=1): err= 0: pid=3915801: Wed Nov 20 10:06:34 2024 00:32:58.920 read: IOPS=455, BW=1820KiB/s (1864kB/s)(17.8MiB/10020msec) 00:32:58.920 slat (usec): min=6, max=109, avg=28.55, stdev=22.84 00:32:58.920 clat (msec): min=22, max=212, avg=34.91, stdev=15.32 00:32:58.920 lat (msec): min=22, max=213, avg=34.93, stdev=15.32 00:32:58.920 clat percentiles (msec): 00:32:58.920 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.920 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:32:58.920 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:58.920 | 99.00th=[ 131], 99.50th=[ 186], 99.90th=[ 213], 99.95th=[ 213], 00:32:58.920 | 99.99th=[ 213] 00:32:58.920 bw ( KiB/s): min= 384, max= 2048, per=4.15%, avg=1817.60, stdev=351.40, samples=20 00:32:58.920 iops : min= 96, max= 512, avg=454.40, stdev=87.85, samples=20 00:32:58.920 lat (msec) : 50=98.55%, 100=0.39%, 250=1.05% 00:32:58.920 cpu : usr=97.17%, sys=1.93%, ctx=127, majf=0, minf=76 00:32:58.920 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:58.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:32:58.920 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.920 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.920 filename0: (groupid=0, jobs=1): err= 0: pid=3915802: Wed Nov 20 10:06:34 2024 00:32:58.920 read: IOPS=453, BW=1813KiB/s (1857kB/s)(17.8MiB/10023msec) 00:32:58.920 slat (usec): min=11, max=131, avg=55.28, stdev=24.09 00:32:58.920 clat (msec): min=18, max=268, avg=34.82, stdev=18.82 00:32:58.920 lat (msec): min=18, max=268, avg=34.87, stdev=18.82 00:32:58.920 clat percentiles (msec): 00:32:58.920 | 1.00th=[ 20], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.920 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:32:58.920 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:32:58.920 | 99.00th=[ 184], 99.50th=[ 184], 99.90th=[ 268], 99.95th=[ 268], 00:32:58.920 | 99.99th=[ 268] 00:32:58.920 bw ( KiB/s): min= 256, max= 2048, per=4.12%, avg=1805.47, stdev=395.43, samples=19 00:32:58.920 iops : min= 64, max= 512, avg=451.37, stdev=98.86, samples=19 00:32:58.920 lat (msec) : 20=1.14%, 50=97.80%, 250=0.70%, 500=0.35% 00:32:58.920 cpu : usr=97.39%, sys=1.76%, ctx=99, majf=0, minf=45 00:32:58.920 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:32:58.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.921 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.921 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.921 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.921 filename0: (groupid=0, jobs=1): err= 0: pid=3915803: Wed Nov 20 10:06:34 2024 00:32:58.921 read: IOPS=454, BW=1817KiB/s (1861kB/s)(17.8MiB/10002msec) 00:32:58.921 slat (nsec): min=5032, max=67249, avg=29959.50, stdev=11496.36 00:32:58.921 clat (msec): min=18, max=352, avg=34.94, stdev=21.21 00:32:58.921 lat (msec): min=18, max=352, avg=34.97, stdev=21.21 00:32:58.921 clat percentiles (msec): 00:32:58.921 | 1.00th=[ 20], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.921 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:32:58.921 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:32:58.921 | 99.00th=[ 75], 99.50th=[ 186], 99.90th=[ 355], 99.95th=[ 355], 00:32:58.921 | 99.99th=[ 355] 00:32:58.921 bw ( KiB/s): min= 128, max= 2048, per=4.13%, avg=1812.21, stdev=414.25, samples=19 00:32:58.921 iops : min= 32, max= 512, avg=453.05, stdev=103.56, samples=19 00:32:58.921 lat (msec) : 20=1.19%, 50=97.76%, 100=0.35%, 250=0.35%, 500=0.35% 00:32:58.921 cpu : usr=98.28%, sys=1.13%, ctx=143, majf=0, minf=67 00:32:58.921 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:32:58.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.921 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.921 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.921 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.921 filename0: (groupid=0, jobs=1): err= 0: pid=3915804: Wed Nov 20 10:06:34 2024 00:32:58.921 read: IOPS=455, BW=1823KiB/s (1867kB/s)(17.8MiB/10005msec) 00:32:58.921 slat (nsec): min=5454, max=81428, avg=29987.51, stdev=13664.58 00:32:58.921 clat (msec): min=18, max=239, avg=34.85, stdev=15.49 00:32:58.921 lat (msec): min=18, max=239, avg=34.88, stdev=15.49 00:32:58.921 clat percentiles (msec): 
00:32:58.921 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.921 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:58.921 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:32:58.921 | 99.00th=[ 131], 99.50th=[ 188], 99.90th=[ 213], 99.95th=[ 213], 00:32:58.921 | 99.99th=[ 241] 00:32:58.921 bw ( KiB/s): min= 384, max= 2048, per=4.15%, avg=1818.95, stdev=360.98, samples=19 00:32:58.921 iops : min= 96, max= 512, avg=454.74, stdev=90.24, samples=19 00:32:58.921 lat (msec) : 20=0.09%, 50=98.55%, 100=0.31%, 250=1.05% 00:32:58.921 cpu : usr=97.40%, sys=1.67%, ctx=190, majf=0, minf=56 00:32:58.921 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:58.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.921 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.921 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.921 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.921 filename0: (groupid=0, jobs=1): err= 0: pid=3915805: Wed Nov 20 10:06:34 2024 00:32:58.921 read: IOPS=453, BW=1815KiB/s (1858kB/s)(17.7MiB/10011msec) 00:32:58.921 slat (usec): min=8, max=127, avg=50.37, stdev=25.37 00:32:58.921 clat (msec): min=13, max=426, avg=34.81, stdev=25.02 00:32:58.921 lat (msec): min=13, max=426, avg=34.87, stdev=25.02 00:32:58.921 clat percentiles (msec): 00:32:58.921 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.921 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:32:58.921 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:32:58.921 | 99.00th=[ 35], 99.50th=[ 184], 99.90th=[ 426], 99.95th=[ 426], 00:32:58.921 | 99.99th=[ 426] 00:32:58.921 bw ( KiB/s): min= 128, max= 2048, per=4.12%, avg=1804.63, stdev=417.59, samples=19 00:32:58.921 iops : min= 32, max= 512, avg=451.16, stdev=104.40, samples=19 00:32:58.921 lat (msec) : 20=0.35%, 50=98.94%, 250=0.35%, 500=0.35% 00:32:58.921 cpu : usr=97.27%, sys=1.79%, ctx=90, majf=0, minf=57 00:32:58.921 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:58.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.921 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.921 issued rwts: total=4542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.921 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.921 filename0: (groupid=0, jobs=1): err= 0: pid=3915806: Wed Nov 20 10:06:34 2024 00:32:58.921 read: IOPS=453, BW=1814KiB/s (1858kB/s)(17.8MiB/10018msec) 00:32:58.921 slat (nsec): min=10224, max=76546, avg=32056.49, stdev=10554.25 00:32:58.921 clat (msec): min=18, max=268, avg=34.99, stdev=18.62 00:32:58.921 lat (msec): min=18, max=268, avg=35.02, stdev=18.62 00:32:58.921 clat percentiles (msec): 00:32:58.921 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.921 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:32:58.921 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:32:58.921 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 268], 99.95th=[ 268], 00:32:58.921 | 99.99th=[ 268] 00:32:58.921 bw ( KiB/s): min= 256, max= 1920, per=4.12%, avg=1805.47, stdev=386.12, samples=19 00:32:58.921 iops : min= 64, max= 480, avg=451.37, stdev=96.53, samples=19 00:32:58.921 lat (msec) : 20=0.04%, 50=98.90%, 250=0.70%, 500=0.35% 00:32:58.921 cpu : usr=97.40%, sys=1.67%, ctx=96, majf=0, minf=52 00:32:58.921 IO depths : 
1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:58.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.921 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.921 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.921 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.921 filename0: (groupid=0, jobs=1): err= 0: pid=3915807: Wed Nov 20 10:06:34 2024 00:32:58.921 read: IOPS=453, BW=1815KiB/s (1859kB/s)(17.8MiB/10013msec) 00:32:58.921 slat (usec): min=10, max=139, avg=38.69, stdev=17.06 00:32:58.921 clat (msec): min=15, max=351, avg=34.91, stdev=22.91 00:32:58.921 lat (msec): min=15, max=351, avg=34.95, stdev=22.91 00:32:58.921 clat percentiles (msec): 00:32:58.921 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.921 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:32:58.921 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:32:58.921 | 99.00th=[ 35], 99.50th=[ 253], 99.90th=[ 351], 99.95th=[ 351], 00:32:58.921 | 99.99th=[ 351] 00:32:58.921 bw ( KiB/s): min= 128, max= 2048, per=4.12%, avg=1805.47, stdev=417.82, samples=19 00:32:58.921 iops : min= 32, max= 512, avg=451.37, stdev=104.45, samples=19 00:32:58.921 lat (msec) : 20=0.35%, 50=98.94%, 500=0.70% 00:32:58.921 cpu : usr=97.66%, sys=1.51%, ctx=113, majf=0, minf=59 00:32:58.921 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:58.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.921 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.921 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.921 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.921 filename1: (groupid=0, jobs=1): err= 0: pid=3915808: Wed Nov 20 10:06:34 2024 00:32:58.921 read: IOPS=457, BW=1831KiB/s (1875kB/s)(17.9MiB/10029msec) 00:32:58.921 slat (usec): min=4, max=118, avg=40.44, stdev=31.02 00:32:58.921 clat (msec): min=3, max=258, avg=34.57, stdev=17.08 00:32:58.921 lat (msec): min=3, max=258, avg=34.61, stdev=17.08 00:32:58.921 clat percentiles (msec): 00:32:58.921 | 1.00th=[ 27], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.921 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:32:58.921 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:32:58.921 | 99.00th=[ 90], 99.50th=[ 194], 99.90th=[ 259], 99.95th=[ 259], 00:32:58.921 | 99.99th=[ 259] 00:32:58.921 bw ( KiB/s): min= 640, max= 2048, per=4.18%, avg=1830.40, stdev=305.45, samples=20 00:32:58.921 iops : min= 160, max= 512, avg=457.60, stdev=76.36, samples=20 00:32:58.921 lat (msec) : 4=0.35%, 10=0.35%, 50=97.91%, 100=0.70%, 250=0.35% 00:32:58.921 lat (msec) : 500=0.35% 00:32:58.921 cpu : usr=97.91%, sys=1.45%, ctx=61, majf=0, minf=77 00:32:58.922 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:58.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.922 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.922 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.922 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.922 filename1: (groupid=0, jobs=1): err= 0: pid=3915809: Wed Nov 20 10:06:34 2024 00:32:58.922 read: IOPS=453, BW=1816KiB/s (1859kB/s)(17.8MiB/10011msec) 00:32:58.922 slat (nsec): min=7630, max=81581, avg=35138.81, stdev=12557.99 00:32:58.922 clat (msec): 
min=15, max=349, avg=34.94, stdev=22.79 00:32:58.922 lat (msec): min=15, max=349, avg=34.98, stdev=22.79 00:32:58.922 clat percentiles (msec): 00:32:58.922 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.922 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:32:58.922 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:32:58.922 | 99.00th=[ 35], 99.50th=[ 253], 99.90th=[ 351], 99.95th=[ 351], 00:32:58.922 | 99.99th=[ 351] 00:32:58.922 bw ( KiB/s): min= 128, max= 2048, per=4.12%, avg=1805.47, stdev=417.82, samples=19 00:32:58.922 iops : min= 32, max= 512, avg=451.37, stdev=104.45, samples=19 00:32:58.922 lat (msec) : 20=0.35%, 50=98.94%, 500=0.70% 00:32:58.922 cpu : usr=98.47%, sys=1.12%, ctx=16, majf=0, minf=38 00:32:58.922 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:58.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.922 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.922 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.922 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.922 filename1: (groupid=0, jobs=1): err= 0: pid=3915810: Wed Nov 20 10:06:34 2024 00:32:58.922 read: IOPS=453, BW=1815KiB/s (1859kB/s)(17.8MiB/10013msec) 00:32:58.922 slat (usec): min=8, max=105, avg=35.12, stdev=12.99 00:32:58.922 clat (msec): min=15, max=351, avg=34.96, stdev=22.89 00:32:58.922 lat (msec): min=15, max=351, avg=35.00, stdev=22.89 00:32:58.922 clat percentiles (msec): 00:32:58.922 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.922 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:32:58.922 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:32:58.922 | 99.00th=[ 35], 99.50th=[ 253], 99.90th=[ 351], 99.95th=[ 351], 00:32:58.922 | 99.99th=[ 351] 00:32:58.922 bw ( KiB/s): min= 128, max= 2048, per=4.12%, avg=1805.47, stdev=417.82, samples=19 00:32:58.922 iops : min= 32, max= 512, avg=451.37, stdev=104.45, samples=19 00:32:58.922 lat (msec) : 20=0.35%, 50=98.94%, 500=0.70% 00:32:58.922 cpu : usr=98.36%, sys=1.18%, ctx=31, majf=0, minf=45 00:32:58.922 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:58.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.922 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.922 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.922 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.922 filename1: (groupid=0, jobs=1): err= 0: pid=3915811: Wed Nov 20 10:06:34 2024 00:32:58.922 read: IOPS=515, BW=2063KiB/s (2112kB/s)(20.2MiB/10011msec) 00:32:58.922 slat (usec): min=7, max=111, avg=22.44, stdev=19.63 00:32:58.922 clat (msec): min=12, max=535, avg=30.89, stdev=28.96 00:32:58.922 lat (msec): min=12, max=535, avg=30.91, stdev=28.96 00:32:58.922 clat percentiles (msec): 00:32:58.922 | 1.00th=[ 15], 5.00th=[ 20], 10.00th=[ 21], 20.00th=[ 24], 00:32:58.922 | 30.00th=[ 25], 40.00th=[ 27], 50.00th=[ 33], 60.00th=[ 33], 00:32:58.922 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:58.922 | 99.00th=[ 47], 99.50th=[ 89], 99.90th=[ 535], 99.95th=[ 535], 00:32:58.922 | 99.99th=[ 535] 00:32:58.922 bw ( KiB/s): min= 1648, max= 2656, per=4.89%, avg=2141.33, stdev=291.29, samples=18 00:32:58.922 iops : min= 412, max= 664, avg=535.33, stdev=72.82, samples=18 00:32:58.922 lat (msec) : 20=8.37%, 50=91.01%, 
100=0.27%, 250=0.04%, 750=0.31% 00:32:58.922 cpu : usr=97.64%, sys=1.62%, ctx=70, majf=0, minf=62 00:32:58.922 IO depths : 1=0.1%, 2=2.6%, 4=13.3%, 8=70.9%, 16=13.1%, 32=0.0%, >=64=0.0% 00:32:58.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.922 complete : 0=0.0%, 4=91.1%, 8=4.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.922 issued rwts: total=5162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.922 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.922 filename1: (groupid=0, jobs=1): err= 0: pid=3915812: Wed Nov 20 10:06:34 2024 00:32:58.922 read: IOPS=460, BW=1840KiB/s (1885kB/s)(18.0MiB/10015msec) 00:32:58.922 slat (usec): min=5, max=111, avg=22.55, stdev=19.95 00:32:58.922 clat (msec): min=3, max=191, avg=34.59, stdev=14.04 00:32:58.922 lat (msec): min=3, max=191, avg=34.61, stdev=14.04 00:32:58.922 clat percentiles (msec): 00:32:58.922 | 1.00th=[ 19], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.922 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:58.922 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:58.922 | 99.00th=[ 81], 99.50th=[ 188], 99.90th=[ 190], 99.95th=[ 190], 00:32:58.922 | 99.99th=[ 192] 00:32:58.922 bw ( KiB/s): min= 768, max= 2048, per=4.19%, avg=1836.80, stdev=270.02, samples=20 00:32:58.922 iops : min= 192, max= 512, avg=459.20, stdev=67.50, samples=20 00:32:58.922 lat (msec) : 4=0.30%, 10=0.39%, 20=1.22%, 50=96.40%, 100=0.95% 00:32:58.922 lat (msec) : 250=0.74% 00:32:58.922 cpu : usr=98.00%, sys=1.36%, ctx=61, majf=0, minf=45 00:32:58.922 IO depths : 1=5.5%, 2=11.7%, 4=24.9%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:32:58.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.922 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.922 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.922 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.922 filename1: (groupid=0, jobs=1): err= 0: pid=3915813: Wed Nov 20 10:06:34 2024 00:32:58.922 read: IOPS=454, BW=1819KiB/s (1862kB/s)(17.8MiB/10029msec) 00:32:58.922 slat (usec): min=8, max=127, avg=46.26, stdev=34.83 00:32:58.922 clat (msec): min=26, max=254, avg=34.79, stdev=17.74 00:32:58.922 lat (msec): min=26, max=254, avg=34.84, stdev=17.75 00:32:58.922 clat percentiles (msec): 00:32:58.922 | 1.00th=[ 28], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.922 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:32:58.922 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:32:58.922 | 99.00th=[ 131], 99.50th=[ 213], 99.90th=[ 255], 99.95th=[ 255], 00:32:58.922 | 99.99th=[ 255] 00:32:58.922 bw ( KiB/s): min= 384, max= 2048, per=4.15%, avg=1817.60, stdev=358.69, samples=20 00:32:58.922 iops : min= 96, max= 512, avg=454.40, stdev=89.67, samples=20 00:32:58.922 lat (msec) : 50=98.95%, 250=0.70%, 500=0.35% 00:32:58.922 cpu : usr=97.07%, sys=1.79%, ctx=135, majf=0, minf=69 00:32:58.922 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:32:58.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.922 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.922 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.922 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.922 filename1: (groupid=0, jobs=1): err= 0: pid=3915814: Wed Nov 20 10:06:34 2024 00:32:58.922 read: IOPS=453, BW=1815KiB/s 
(1859kB/s)(17.8MiB/10014msec) 00:32:58.922 slat (nsec): min=7700, max=84043, avg=22419.18, stdev=13224.33 00:32:58.923 clat (msec): min=26, max=268, avg=35.09, stdev=18.35 00:32:58.923 lat (msec): min=26, max=268, avg=35.11, stdev=18.35 00:32:58.923 clat percentiles (msec): 00:32:58.923 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.923 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:58.923 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:32:58.923 | 99.00th=[ 178], 99.50th=[ 178], 99.90th=[ 268], 99.95th=[ 268], 00:32:58.923 | 99.99th=[ 268] 00:32:58.923 bw ( KiB/s): min= 256, max= 1920, per=4.12%, avg=1805.47, stdev=386.12, samples=19 00:32:58.923 iops : min= 64, max= 480, avg=451.37, stdev=96.53, samples=19 00:32:58.923 lat (msec) : 50=98.94%, 250=0.70%, 500=0.35% 00:32:58.923 cpu : usr=98.37%, sys=1.19%, ctx=28, majf=0, minf=61 00:32:58.923 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:58.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.923 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.923 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.923 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.923 filename1: (groupid=0, jobs=1): err= 0: pid=3915815: Wed Nov 20 10:06:34 2024 00:32:58.923 read: IOPS=452, BW=1810KiB/s (1853kB/s)(17.7MiB/10007msec) 00:32:58.923 slat (usec): min=31, max=113, avg=78.13, stdev= 8.88 00:32:58.923 clat (msec): min=19, max=601, avg=34.66, stdev=30.50 00:32:58.923 lat (msec): min=20, max=601, avg=34.74, stdev=30.50 00:32:58.923 clat percentiles (msec): 00:32:58.923 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.923 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:32:58.923 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:32:58.923 | 99.00th=[ 35], 99.50th=[ 89], 99.90th=[ 535], 99.95th=[ 535], 00:32:58.923 | 99.99th=[ 600] 00:32:58.923 bw ( KiB/s): min= 1664, max= 2048, per=4.33%, avg=1898.67, stdev=79.15, samples=18 00:32:58.923 iops : min= 416, max= 512, avg=474.67, stdev=19.79, samples=18 00:32:58.923 lat (msec) : 20=0.02%, 50=99.32%, 100=0.31%, 750=0.35% 00:32:58.923 cpu : usr=95.87%, sys=2.37%, ctx=227, majf=0, minf=39 00:32:58.923 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:58.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.923 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.923 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.923 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.923 filename2: (groupid=0, jobs=1): err= 0: pid=3915816: Wed Nov 20 10:06:34 2024 00:32:58.923 read: IOPS=453, BW=1814KiB/s (1857kB/s)(17.7MiB/10016msec) 00:32:58.923 slat (nsec): min=4063, max=84619, avg=32699.13, stdev=12085.13 00:32:58.923 clat (msec): min=15, max=354, avg=35.00, stdev=23.09 00:32:58.923 lat (msec): min=15, max=354, avg=35.04, stdev=23.09 00:32:58.923 clat percentiles (msec): 00:32:58.923 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.923 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:32:58.923 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:32:58.923 | 99.00th=[ 35], 99.50th=[ 253], 99.90th=[ 355], 99.95th=[ 355], 00:32:58.923 | 99.99th=[ 355] 00:32:58.923 bw ( KiB/s): min= 128, max= 2048, per=4.13%, avg=1811.20, 
stdev=407.48, samples=20 00:32:58.923 iops : min= 32, max= 512, avg=452.80, stdev=101.87, samples=20 00:32:58.923 lat (msec) : 20=0.31%, 50=98.99%, 500=0.70% 00:32:58.923 cpu : usr=97.50%, sys=1.72%, ctx=134, majf=0, minf=50 00:32:58.923 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:58.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.923 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.923 issued rwts: total=4542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.923 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.923 filename2: (groupid=0, jobs=1): err= 0: pid=3915817: Wed Nov 20 10:06:34 2024 00:32:58.923 read: IOPS=453, BW=1815KiB/s (1858kB/s)(17.8MiB/10017msec) 00:32:58.923 slat (usec): min=8, max=114, avg=35.57, stdev=15.47 00:32:58.923 clat (msec): min=26, max=295, avg=34.95, stdev=18.63 00:32:58.923 lat (msec): min=26, max=295, avg=34.98, stdev=18.63 00:32:58.923 clat percentiles (msec): 00:32:58.923 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.923 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:32:58.923 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:32:58.923 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 268], 99.95th=[ 268], 00:32:58.923 | 99.99th=[ 296] 00:32:58.923 bw ( KiB/s): min= 256, max= 1920, per=4.12%, avg=1805.47, stdev=386.12, samples=19 00:32:58.923 iops : min= 64, max= 480, avg=451.37, stdev=96.53, samples=19 00:32:58.923 lat (msec) : 50=98.94%, 250=0.70%, 500=0.35% 00:32:58.923 cpu : usr=97.24%, sys=1.82%, ctx=159, majf=0, minf=31 00:32:58.923 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:58.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.923 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.923 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.923 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.923 filename2: (groupid=0, jobs=1): err= 0: pid=3915818: Wed Nov 20 10:06:34 2024 00:32:58.923 read: IOPS=453, BW=1815KiB/s (1858kB/s)(17.8MiB/10015msec) 00:32:58.923 slat (usec): min=10, max=123, avg=50.12, stdev=24.87 00:32:58.923 clat (msec): min=15, max=353, avg=34.82, stdev=23.02 00:32:58.923 lat (msec): min=15, max=353, avg=34.87, stdev=23.02 00:32:58.923 clat percentiles (msec): 00:32:58.923 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.923 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:32:58.923 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:32:58.923 | 99.00th=[ 35], 99.50th=[ 253], 99.90th=[ 355], 99.95th=[ 355], 00:32:58.923 | 99.99th=[ 355] 00:32:58.923 bw ( KiB/s): min= 128, max= 2048, per=4.13%, avg=1811.20, stdev=407.48, samples=20 00:32:58.923 iops : min= 32, max= 512, avg=452.80, stdev=101.87, samples=20 00:32:58.923 lat (msec) : 20=0.35%, 50=98.94%, 500=0.70% 00:32:58.923 cpu : usr=95.73%, sys=2.56%, ctx=597, majf=0, minf=65 00:32:58.923 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:58.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.923 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.923 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.923 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.923 filename2: (groupid=0, jobs=1): err= 0: pid=3915819: 
Wed Nov 20 10:06:34 2024 00:32:58.923 read: IOPS=453, BW=1815KiB/s (1859kB/s)(17.8MiB/10012msec) 00:32:58.923 slat (nsec): min=11403, max=87317, avg=34897.78, stdev=10734.77 00:32:58.923 clat (msec): min=15, max=349, avg=34.94, stdev=22.82 00:32:58.923 lat (msec): min=15, max=349, avg=34.97, stdev=22.82 00:32:58.923 clat percentiles (msec): 00:32:58.923 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.923 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:32:58.923 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:32:58.923 | 99.00th=[ 35], 99.50th=[ 253], 99.90th=[ 351], 99.95th=[ 351], 00:32:58.923 | 99.99th=[ 351] 00:32:58.923 bw ( KiB/s): min= 128, max= 2048, per=4.12%, avg=1805.47, stdev=417.82, samples=19 00:32:58.923 iops : min= 32, max= 512, avg=451.37, stdev=104.45, samples=19 00:32:58.923 lat (msec) : 20=0.35%, 50=98.94%, 500=0.70% 00:32:58.923 cpu : usr=97.88%, sys=1.52%, ctx=39, majf=0, minf=54 00:32:58.923 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:58.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.923 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.923 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.923 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.923 filename2: (groupid=0, jobs=1): err= 0: pid=3915820: Wed Nov 20 10:06:34 2024 00:32:58.923 read: IOPS=453, BW=1813KiB/s (1856kB/s)(17.8MiB/10026msec) 00:32:58.923 slat (usec): min=3, max=150, avg=73.38, stdev=20.22 00:32:58.923 clat (msec): min=26, max=260, avg=34.66, stdev=18.61 00:32:58.923 lat (msec): min=26, max=260, avg=34.73, stdev=18.61 00:32:58.923 clat percentiles (msec): 00:32:58.923 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.923 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:32:58.923 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:32:58.923 | 99.00th=[ 167], 99.50th=[ 213], 99.90th=[ 255], 99.95th=[ 255], 00:32:58.923 | 99.99th=[ 262] 00:32:58.923 bw ( KiB/s): min= 256, max= 1920, per=4.12%, avg=1805.47, stdev=386.12, samples=19 00:32:58.923 iops : min= 64, max= 480, avg=451.37, stdev=96.53, samples=19 00:32:58.923 lat (msec) : 50=98.94%, 250=0.70%, 500=0.35% 00:32:58.923 cpu : usr=96.75%, sys=1.90%, ctx=239, majf=0, minf=60 00:32:58.924 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:58.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.924 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.924 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.924 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.924 filename2: (groupid=0, jobs=1): err= 0: pid=3915821: Wed Nov 20 10:06:34 2024 00:32:58.924 read: IOPS=455, BW=1822KiB/s (1865kB/s)(17.8MiB/10013msec) 00:32:58.924 slat (nsec): min=3804, max=99612, avg=29017.29, stdev=13485.95 00:32:58.924 clat (msec): min=18, max=267, avg=34.89, stdev=17.41 00:32:58.924 lat (msec): min=18, max=267, avg=34.92, stdev=17.41 00:32:58.924 clat percentiles (msec): 00:32:58.924 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.924 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:58.924 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:32:58.924 | 99.00th=[ 132], 99.50th=[ 184], 99.90th=[ 268], 99.95th=[ 268], 00:32:58.924 | 99.99th=[ 268] 00:32:58.924 
bw ( KiB/s): min= 513, max= 2048, per=4.15%, avg=1817.65, stdev=330.98, samples=20 00:32:58.924 iops : min= 128, max= 512, avg=454.40, stdev=82.80, samples=20 00:32:58.924 lat (msec) : 20=0.04%, 50=98.86%, 100=0.09%, 250=0.66%, 500=0.35% 00:32:58.924 cpu : usr=97.89%, sys=1.47%, ctx=87, majf=0, minf=65 00:32:58.924 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:58.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.924 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.924 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.924 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.924 filename2: (groupid=0, jobs=1): err= 0: pid=3915822: Wed Nov 20 10:06:34 2024 00:32:58.924 read: IOPS=453, BW=1812KiB/s (1856kB/s)(17.7MiB/10016msec) 00:32:58.924 slat (nsec): min=8417, max=81905, avg=35467.57, stdev=12858.79 00:32:58.924 clat (msec): min=15, max=354, avg=34.96, stdev=23.10 00:32:58.924 lat (msec): min=15, max=354, avg=35.00, stdev=23.10 00:32:58.924 clat percentiles (msec): 00:32:58.924 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.924 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:32:58.924 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:32:58.924 | 99.00th=[ 35], 99.50th=[ 253], 99.90th=[ 355], 99.95th=[ 355], 00:32:58.924 | 99.99th=[ 355] 00:32:58.924 bw ( KiB/s): min= 128, max= 2048, per=4.13%, avg=1811.20, stdev=407.48, samples=20 00:32:58.924 iops : min= 32, max= 512, avg=452.80, stdev=101.87, samples=20 00:32:58.924 lat (msec) : 20=0.22%, 50=99.07%, 500=0.71% 00:32:58.924 cpu : usr=98.31%, sys=1.25%, ctx=16, majf=0, minf=52 00:32:58.924 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:58.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.924 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.924 issued rwts: total=4538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.924 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:58.924 filename2: (groupid=0, jobs=1): err= 0: pid=3915823: Wed Nov 20 10:06:34 2024 00:32:58.924 read: IOPS=455, BW=1820KiB/s (1864kB/s)(17.8MiB/10020msec) 00:32:58.924 slat (usec): min=5, max=138, avg=39.54, stdev=29.59 00:32:58.924 clat (msec): min=31, max=212, avg=34.81, stdev=15.38 00:32:58.924 lat (msec): min=31, max=212, avg=34.85, stdev=15.38 00:32:58.924 clat percentiles (msec): 00:32:58.924 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:32:58.924 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:32:58.924 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 34], 00:32:58.924 | 99.00th=[ 131], 99.50th=[ 188], 99.90th=[ 213], 99.95th=[ 213], 00:32:58.924 | 99.99th=[ 213] 00:32:58.924 bw ( KiB/s): min= 384, max= 2048, per=4.15%, avg=1817.60, stdev=351.40, samples=20 00:32:58.924 iops : min= 96, max= 512, avg=454.40, stdev=87.85, samples=20 00:32:58.924 lat (msec) : 50=98.55%, 100=0.39%, 250=1.05% 00:32:58.924 cpu : usr=97.43%, sys=1.72%, ctx=91, majf=0, minf=71 00:32:58.924 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:58.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.924 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.924 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.924 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:32:58.924 00:32:58.924 Run status group 0 (all jobs): 00:32:58.924 READ: bw=42.8MiB/s (44.9MB/s), 1810KiB/s-2063KiB/s (1853kB/s-2112kB/s), io=429MiB (450MB), run=10002-10029msec 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.924 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:58.925 bdev_null0 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:58.925 [2024-11-20 10:06:35.113991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:58.925 10:06:35 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:58.925 bdev_null1 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:58.925 { 00:32:58.925 "params": { 00:32:58.925 "name": "Nvme$subsystem", 00:32:58.925 "trtype": "$TEST_TRANSPORT", 00:32:58.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:58.925 "adrfam": "ipv4", 00:32:58.925 "trsvcid": "$NVMF_PORT", 00:32:58.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:58.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:58.925 "hdgst": ${hdgst:-false}, 00:32:58.925 "ddgst": ${ddgst:-false} 00:32:58.925 }, 00:32:58.925 "method": "bdev_nvme_attach_controller" 00:32:58.925 } 00:32:58.925 EOF 00:32:58.925 )") 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 
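For reference, the subsystem setup that the xtrace above drives through rpc_cmd reduces to the standalone sequence below. This is a sketch, not part of the test output: it assumes rpc_cmd is the usual SPDK wrapper around scripts/rpc.py talking to the running nvmf_tgt, and every name, flag, and address is taken from the log lines above (the same steps repeat with bdev_null1/cnode1 for the second subsystem).
# Sketch: 64 MB null bdev with 512-byte blocks, 16-byte metadata, DIF type 1,
# exported over NVMe/TCP on the listener shown in the log.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420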
00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:58.925 { 00:32:58.925 "params": { 00:32:58.925 "name": "Nvme$subsystem", 00:32:58.925 "trtype": "$TEST_TRANSPORT", 00:32:58.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:58.925 "adrfam": "ipv4", 00:32:58.925 "trsvcid": "$NVMF_PORT", 00:32:58.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:58.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:58.925 "hdgst": ${hdgst:-false}, 00:32:58.925 "ddgst": ${ddgst:-false} 00:32:58.925 }, 00:32:58.925 "method": "bdev_nvme_attach_controller" 00:32:58.925 } 00:32:58.925 EOF 00:32:58.925 )") 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:58.925 10:06:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:58.925 "params": { 00:32:58.925 "name": "Nvme0", 00:32:58.925 "trtype": "tcp", 00:32:58.925 "traddr": "10.0.0.2", 00:32:58.925 "adrfam": "ipv4", 00:32:58.925 "trsvcid": "4420", 00:32:58.925 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:58.925 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:58.925 "hdgst": false, 00:32:58.925 "ddgst": false 00:32:58.925 }, 00:32:58.925 "method": "bdev_nvme_attach_controller" 00:32:58.925 },{ 00:32:58.925 "params": { 00:32:58.925 "name": "Nvme1", 00:32:58.925 "trtype": "tcp", 00:32:58.925 "traddr": "10.0.0.2", 00:32:58.925 "adrfam": "ipv4", 00:32:58.925 "trsvcid": "4420", 00:32:58.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:58.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:58.925 "hdgst": false, 00:32:58.925 "ddgst": false 00:32:58.926 }, 00:32:58.926 "method": "bdev_nvme_attach_controller" 00:32:58.926 }' 00:32:58.926 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:58.926 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:58.926 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:58.926 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:58.926 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:58.926 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:58.926 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:58.926 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:58.926 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:58.926 10:06:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:58.926 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:58.926 ... 00:32:58.926 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:58.926 ... 
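The fio_bdev invocation assembled above preloads the SPDK bdev engine and feeds it the JSON config just printed plus a generated job file. A rough standalone equivalent is sketched below; the job file is hypothetical and only mirrors the parameters visible in the log (randread, bs=8k,16k,128k, iodepth=8, numjobs=2, 5-second runtime), and it assumes the attached controllers Nvme0/Nvme1 expose their namespaces under the default Nvme0n1/Nvme1n1 bdev names.
# Sketch only. bdev.json stands in for the two bdev_nvme_attach_controller
# stanzas printed above; dif.fio is a hypothetical job file name and its
# layout is illustrative, not the exact output of gen_fio_conf.
cat > dif.fio <<'EOF'
[global]
thread=1            # the spdk_bdev engine runs jobs as threads
rw=randread
bs=8k,16k,128k      # read/write/trim block sizes, as reported by fio above
iodepth=8
numjobs=2
time_based=1        # assumed, so the 5-second runtime governs the run
runtime=5

[filename0]
filename=Nvme0n1    # assumed default namespace bdev name for controller Nvme0

[filename1]
filename=Nvme1n1    # assumed default namespace bdev name for controller Nvme1
EOF

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=./bdev.json dif.fio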
00:32:58.926 fio-3.35 00:32:58.926 Starting 4 threads 00:33:05.479 00:33:05.479 filename0: (groupid=0, jobs=1): err= 0: pid=3917201: Wed Nov 20 10:06:41 2024 00:33:05.479 read: IOPS=1916, BW=15.0MiB/s (15.7MB/s)(74.9MiB/5003msec) 00:33:05.479 slat (nsec): min=6869, max=67459, avg=14965.54, stdev=9297.07 00:33:05.479 clat (usec): min=668, max=7878, avg=4125.34, stdev=564.79 00:33:05.479 lat (usec): min=685, max=7885, avg=4140.30, stdev=565.42 00:33:05.479 clat percentiles (usec): 00:33:05.479 | 1.00th=[ 2008], 5.00th=[ 3228], 10.00th=[ 3589], 20.00th=[ 3884], 00:33:05.479 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:33:05.479 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4817], 00:33:05.479 | 99.00th=[ 5866], 99.50th=[ 6259], 99.90th=[ 7111], 99.95th=[ 7373], 00:33:05.479 | 99.99th=[ 7898] 00:33:05.479 bw ( KiB/s): min=14944, max=16192, per=25.49%, avg=15332.80, stdev=379.16, samples=10 00:33:05.479 iops : min= 1868, max= 2024, avg=1916.60, stdev=47.39, samples=10 00:33:05.479 lat (usec) : 750=0.01%, 1000=0.03% 00:33:05.479 lat (msec) : 2=0.94%, 4=25.93%, 10=73.09% 00:33:05.479 cpu : usr=94.90%, sys=4.60%, ctx=10, majf=0, minf=63 00:33:05.479 IO depths : 1=0.4%, 2=9.6%, 4=61.2%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:05.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.479 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.479 issued rwts: total=9590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.479 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:05.479 filename0: (groupid=0, jobs=1): err= 0: pid=3917202: Wed Nov 20 10:06:41 2024 00:33:05.479 read: IOPS=1854, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5002msec) 00:33:05.479 slat (nsec): min=5302, max=68465, avg=18556.64, stdev=10105.79 00:33:05.479 clat (usec): min=966, max=7637, avg=4248.69, stdev=592.83 00:33:05.479 lat (usec): min=979, max=7650, avg=4267.25, stdev=592.46 00:33:05.479 clat percentiles (usec): 00:33:05.479 | 1.00th=[ 2442], 5.00th=[ 3523], 10.00th=[ 3752], 20.00th=[ 4015], 00:33:05.479 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228], 00:33:05.479 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4817], 95.00th=[ 5211], 00:33:05.479 | 99.00th=[ 6652], 99.50th=[ 6915], 99.90th=[ 7308], 99.95th=[ 7439], 00:33:05.479 | 99.99th=[ 7635] 00:33:05.479 bw ( KiB/s): min=14656, max=14992, per=24.67%, avg=14838.10, stdev=96.73, samples=10 00:33:05.479 iops : min= 1832, max= 1874, avg=1854.70, stdev=12.11, samples=10 00:33:05.479 lat (usec) : 1000=0.01% 00:33:05.479 lat (msec) : 2=0.63%, 4=18.71%, 10=80.65% 00:33:05.479 cpu : usr=95.26%, sys=4.22%, ctx=7, majf=0, minf=98 00:33:05.479 IO depths : 1=0.2%, 2=15.2%, 4=57.1%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:05.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.479 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.479 issued rwts: total=9277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.479 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:05.479 filename1: (groupid=0, jobs=1): err= 0: pid=3917203: Wed Nov 20 10:06:41 2024 00:33:05.479 read: IOPS=1880, BW=14.7MiB/s (15.4MB/s)(73.5MiB/5001msec) 00:33:05.479 slat (nsec): min=4961, max=78028, avg=20795.15, stdev=11822.16 00:33:05.479 clat (usec): min=926, max=7774, avg=4175.82, stdev=567.02 00:33:05.479 lat (usec): min=940, max=7788, avg=4196.62, stdev=567.30 00:33:05.479 clat percentiles (usec): 00:33:05.479 | 1.00th=[ 
2376], 5.00th=[ 3392], 10.00th=[ 3687], 20.00th=[ 3949], 00:33:05.479 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:33:05.479 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 5014], 00:33:05.479 | 99.00th=[ 6259], 99.50th=[ 6849], 99.90th=[ 7373], 99.95th=[ 7504], 00:33:05.479 | 99.99th=[ 7767] 00:33:05.479 bw ( KiB/s): min=14608, max=15440, per=24.90%, avg=14981.33, stdev=265.09, samples=9 00:33:05.479 iops : min= 1826, max= 1930, avg=1872.67, stdev=33.14, samples=9 00:33:05.479 lat (usec) : 1000=0.02% 00:33:05.479 lat (msec) : 2=0.63%, 4=23.40%, 10=75.95% 00:33:05.479 cpu : usr=93.32%, sys=4.84%, ctx=153, majf=0, minf=70 00:33:05.479 IO depths : 1=0.4%, 2=17.9%, 4=55.5%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:05.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.479 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.479 issued rwts: total=9403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.479 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:05.479 filename1: (groupid=0, jobs=1): err= 0: pid=3917204: Wed Nov 20 10:06:41 2024 00:33:05.479 read: IOPS=1869, BW=14.6MiB/s (15.3MB/s)(73.0MiB/5001msec) 00:33:05.479 slat (nsec): min=4895, max=75398, avg=20297.13, stdev=11859.50 00:33:05.479 clat (usec): min=831, max=7786, avg=4203.47, stdev=645.23 00:33:05.479 lat (usec): min=847, max=7805, avg=4223.77, stdev=645.43 00:33:05.479 clat percentiles (usec): 00:33:05.479 | 1.00th=[ 1991], 5.00th=[ 3359], 10.00th=[ 3720], 20.00th=[ 3982], 00:33:05.479 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:33:05.479 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 5145], 00:33:05.479 | 99.00th=[ 6783], 99.50th=[ 7111], 99.90th=[ 7439], 99.95th=[ 7504], 00:33:05.479 | 99.99th=[ 7767] 00:33:05.479 bw ( KiB/s): min=14560, max=15504, per=24.85%, avg=14948.50, stdev=256.41, samples=10 00:33:05.479 iops : min= 1820, max= 1938, avg=1868.50, stdev=32.07, samples=10 00:33:05.479 lat (usec) : 1000=0.10% 00:33:05.479 lat (msec) : 2=0.92%, 4=21.10%, 10=77.88% 00:33:05.479 cpu : usr=95.24%, sys=4.24%, ctx=9, majf=0, minf=114 00:33:05.479 IO depths : 1=0.2%, 2=16.5%, 4=56.0%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:05.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.479 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.479 issued rwts: total=9349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.479 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:05.479 00:33:05.479 Run status group 0 (all jobs): 00:33:05.479 READ: bw=58.7MiB/s (61.6MB/s), 14.5MiB/s-15.0MiB/s (15.2MB/s-15.7MB/s), io=294MiB (308MB), run=5001-5003msec 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.479 00:33:05.479 real 0m24.928s 00:33:05.479 user 4m32.356s 00:33:05.479 sys 0m6.935s 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:05.479 10:06:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:05.479 ************************************ 00:33:05.479 END TEST fio_dif_rand_params 00:33:05.479 ************************************ 00:33:05.479 10:06:41 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:05.479 10:06:41 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:05.479 10:06:41 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:05.479 10:06:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:05.479 ************************************ 00:33:05.479 START TEST fio_dif_digest 00:33:05.479 ************************************ 00:33:05.479 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:33:05.479 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:05.479 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:05.479 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:05.479 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:05.479 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:05.479 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:05.480 bdev_null0 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:05.480 [2024-11-20 10:06:41.634982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:05.480 { 00:33:05.480 "params": { 00:33:05.480 "name": "Nvme$subsystem", 00:33:05.480 "trtype": "$TEST_TRANSPORT", 00:33:05.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:05.480 "adrfam": "ipv4", 00:33:05.480 "trsvcid": "$NVMF_PORT", 00:33:05.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:05.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:05.480 "hdgst": ${hdgst:-false}, 00:33:05.480 "ddgst": ${ddgst:-false} 00:33:05.480 }, 00:33:05.480 "method": "bdev_nvme_attach_controller" 
00:33:05.480 } 00:33:05.480 EOF 00:33:05.480 )") 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:05.480 "params": { 00:33:05.480 "name": "Nvme0", 00:33:05.480 "trtype": "tcp", 00:33:05.480 "traddr": "10.0.0.2", 00:33:05.480 "adrfam": "ipv4", 00:33:05.480 "trsvcid": "4420", 00:33:05.480 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:05.480 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:05.480 "hdgst": true, 00:33:05.480 "ddgst": true 00:33:05.480 }, 00:33:05.480 "method": "bdev_nvme_attach_controller" 00:33:05.480 }' 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:05.480 10:06:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:05.480 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:05.480 ... 
00:33:05.480 fio-3.35 00:33:05.480 Starting 3 threads 00:33:17.673 00:33:17.673 filename0: (groupid=0, jobs=1): err= 0: pid=3917999: Wed Nov 20 10:06:52 2024 00:33:17.673 read: IOPS=222, BW=27.9MiB/s (29.2MB/s)(280MiB/10046msec) 00:33:17.673 slat (nsec): min=5638, max=69709, avg=16409.24, stdev=3373.55 00:33:17.673 clat (usec): min=10751, max=52064, avg=13415.05, stdev=1415.74 00:33:17.673 lat (usec): min=10766, max=52079, avg=13431.46, stdev=1415.66 00:33:17.673 clat percentiles (usec): 00:33:17.673 | 1.00th=[11469], 5.00th=[11994], 10.00th=[12256], 20.00th=[12649], 00:33:17.673 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13566], 00:33:17.673 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14353], 95.00th=[14746], 00:33:17.673 | 99.00th=[15401], 99.50th=[15664], 99.90th=[19268], 99.95th=[50594], 00:33:17.673 | 99.99th=[52167] 00:33:17.673 bw ( KiB/s): min=28160, max=28928, per=35.22%, avg=28646.40, stdev=247.78, samples=20 00:33:17.673 iops : min= 220, max= 226, avg=223.80, stdev= 1.94, samples=20 00:33:17.673 lat (msec) : 20=99.91%, 100=0.09% 00:33:17.673 cpu : usr=90.21%, sys=7.49%, ctx=447, majf=0, minf=128 00:33:17.673 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:17.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.673 issued rwts: total=2240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:17.673 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:17.673 filename0: (groupid=0, jobs=1): err= 0: pid=3918000: Wed Nov 20 10:06:52 2024 00:33:17.673 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(258MiB/10046msec) 00:33:17.673 slat (nsec): min=4406, max=39868, avg=14124.01, stdev=1574.77 00:33:17.673 clat (usec): min=10931, max=52828, avg=14577.88, stdev=1448.73 00:33:17.673 lat (usec): min=10944, max=52843, avg=14592.00, stdev=1448.71 00:33:17.673 clat percentiles (usec): 00:33:17.673 | 1.00th=[12387], 5.00th=[13042], 10.00th=[13304], 20.00th=[13829], 00:33:17.673 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14484], 60.00th=[14746], 00:33:17.673 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15795], 95.00th=[16057], 00:33:17.673 | 99.00th=[16909], 99.50th=[17433], 99.90th=[21103], 99.95th=[46400], 00:33:17.673 | 99.99th=[52691] 00:33:17.673 bw ( KiB/s): min=25344, max=27136, per=32.42%, avg=26368.00, stdev=491.37, samples=20 00:33:17.673 iops : min= 198, max= 212, avg=206.00, stdev= 3.84, samples=20 00:33:17.673 lat (msec) : 20=99.81%, 50=0.15%, 100=0.05% 00:33:17.673 cpu : usr=94.51%, sys=4.98%, ctx=32, majf=0, minf=184 00:33:17.673 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:17.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.673 issued rwts: total=2062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:17.673 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:17.673 filename0: (groupid=0, jobs=1): err= 0: pid=3918002: Wed Nov 20 10:06:52 2024 00:33:17.673 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(260MiB/10045msec) 00:33:17.673 slat (nsec): min=4499, max=28258, avg=14055.64, stdev=1462.51 00:33:17.673 clat (usec): min=10795, max=51696, avg=14437.32, stdev=1430.49 00:33:17.673 lat (usec): min=10809, max=51710, avg=14451.37, stdev=1430.48 00:33:17.673 clat percentiles (usec): 00:33:17.673 | 1.00th=[12256], 5.00th=[12911], 10.00th=[13304], 20.00th=[13698], 
00:33:17.673 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:33:17.673 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15533], 95.00th=[15926], 00:33:17.673 | 99.00th=[16712], 99.50th=[16909], 99.90th=[19006], 99.95th=[48497], 00:33:17.674 | 99.99th=[51643] 00:33:17.674 bw ( KiB/s): min=25600, max=27392, per=32.72%, avg=26611.20, stdev=410.90, samples=20 00:33:17.674 iops : min= 200, max= 214, avg=207.90, stdev= 3.21, samples=20 00:33:17.674 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:33:17.674 cpu : usr=94.22%, sys=5.30%, ctx=16, majf=0, minf=68 00:33:17.674 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:17.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.674 issued rwts: total=2082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:17.674 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:17.674 00:33:17.674 Run status group 0 (all jobs): 00:33:17.674 READ: bw=79.4MiB/s (83.3MB/s), 25.7MiB/s-27.9MiB/s (26.9MB/s-29.2MB/s), io=798MiB (837MB), run=10045-10046msec 00:33:17.674 10:06:52 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:17.674 10:06:52 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:33:17.674 10:06:52 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:33:17.674 10:06:52 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:17.674 10:06:52 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:33:17.674 10:06:52 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:17.674 10:06:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.674 10:06:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:17.674 10:06:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.674 10:06:52 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:17.674 10:06:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.674 10:06:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:17.674 10:06:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.674 00:33:17.674 real 0m11.092s 00:33:17.674 user 0m29.136s 00:33:17.674 sys 0m2.059s 00:33:17.674 10:06:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:17.674 10:06:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:17.674 ************************************ 00:33:17.674 END TEST fio_dif_digest 00:33:17.674 ************************************ 00:33:17.674 10:06:52 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:17.674 10:06:52 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:33:17.674 10:06:52 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:17.674 10:06:52 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:33:17.674 10:06:52 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:17.674 10:06:52 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:33:17.674 10:06:52 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:17.674 10:06:52 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:17.674 rmmod nvme_tcp 00:33:17.674 rmmod nvme_fabrics 00:33:17.674 rmmod nvme_keyring 00:33:17.674 10:06:52 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:17.674 10:06:52 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:33:17.674 10:06:52 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:33:17.674 10:06:52 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3911276 ']' 00:33:17.674 10:06:52 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3911276 00:33:17.674 10:06:52 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3911276 ']' 00:33:17.674 10:06:52 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3911276 00:33:17.674 10:06:52 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:33:17.674 10:06:52 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:17.674 10:06:52 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3911276 00:33:17.674 10:06:52 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:17.674 10:06:52 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:17.674 10:06:52 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3911276' 00:33:17.674 killing process with pid 3911276 00:33:17.674 10:06:52 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3911276 00:33:17.674 10:06:52 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3911276 00:33:17.674 10:06:53 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:33:17.674 10:06:53 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:17.674 Waiting for block devices as requested 00:33:17.674 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:17.674 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:17.674 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:17.674 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:17.674 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:17.674 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:17.932 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:17.932 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:17.932 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:33:18.190 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:18.190 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:18.190 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:18.451 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:18.451 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:18.451 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:18.451 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:18.710 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:18.710 10:06:55 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:18.710 10:06:55 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:18.710 10:06:55 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:33:18.710 10:06:55 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:33:18.710 10:06:55 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:18.710 10:06:55 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:33:18.710 10:06:55 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:18.710 10:06:55 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:18.710 10:06:55 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:18.710 10:06:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:18.710 10:06:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.251 10:06:57 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:21.251 
00:33:21.251 real 1m7.874s 00:33:21.251 user 6m30.074s 00:33:21.251 sys 0m18.507s 00:33:21.251 10:06:57 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:21.251 10:06:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:21.251 ************************************ 00:33:21.251 END TEST nvmf_dif 00:33:21.251 ************************************ 00:33:21.251 10:06:57 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:21.251 10:06:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:21.251 10:06:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:21.251 10:06:57 -- common/autotest_common.sh@10 -- # set +x 00:33:21.251 ************************************ 00:33:21.251 START TEST nvmf_abort_qd_sizes 00:33:21.251 ************************************ 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:21.251 * Looking for test storage... 00:33:21.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:21.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.251 --rc genhtml_branch_coverage=1 00:33:21.251 --rc genhtml_function_coverage=1 00:33:21.251 --rc genhtml_legend=1 00:33:21.251 --rc geninfo_all_blocks=1 00:33:21.251 --rc geninfo_unexecuted_blocks=1 00:33:21.251 00:33:21.251 ' 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:21.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.251 --rc genhtml_branch_coverage=1 00:33:21.251 --rc genhtml_function_coverage=1 00:33:21.251 --rc genhtml_legend=1 00:33:21.251 --rc geninfo_all_blocks=1 00:33:21.251 --rc geninfo_unexecuted_blocks=1 00:33:21.251 00:33:21.251 ' 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:21.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.251 --rc genhtml_branch_coverage=1 00:33:21.251 --rc genhtml_function_coverage=1 00:33:21.251 --rc genhtml_legend=1 00:33:21.251 --rc geninfo_all_blocks=1 00:33:21.251 --rc geninfo_unexecuted_blocks=1 00:33:21.251 00:33:21.251 ' 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:21.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.251 --rc genhtml_branch_coverage=1 00:33:21.251 --rc genhtml_function_coverage=1 00:33:21.251 --rc genhtml_legend=1 00:33:21.251 --rc geninfo_all_blocks=1 00:33:21.251 --rc geninfo_unexecuted_blocks=1 00:33:21.251 00:33:21.251 ' 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:33:21.251 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:21.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:33:21.252 10:06:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:23.156 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.156 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:23.157 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:23.157 Found net devices under 0000:09:00.0: cvl_0_0 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:23.157 Found net devices under 0000:09:00.1: cvl_0_1 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:23.157 10:06:59 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:23.157 10:06:59 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:23.157 10:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:23.157 10:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:23.157 10:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:23.157 10:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:23.157 10:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:23.157 10:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:23.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:23.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:33:23.157 00:33:23.157 --- 10.0.0.2 ping statistics --- 00:33:23.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.157 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:33:23.157 10:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:23.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:23.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:33:23.416 00:33:23.416 --- 10.0.0.1 ping statistics --- 00:33:23.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.416 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:33:23.416 10:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:23.416 10:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:33:23.416 10:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:23.416 10:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:24.790 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:24.790 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:24.790 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:24.790 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:24.790 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:24.790 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:24.790 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:24.790 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:24.790 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:24.790 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:24.790 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:24.790 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:24.790 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:24.790 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:24.790 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:24.790 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:25.729 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:33:25.729 10:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:25.729 10:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:25.729 10:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:25.729 10:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:25.729 10:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:25.729 10:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:25.729 10:07:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:25.729 10:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:25.729 10:07:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:25.729 10:07:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:25.729 10:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3922880 00:33:25.729 10:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:25.729 10:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3922880 00:33:25.729 10:07:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3922880 ']' 00:33:25.729 10:07:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.729 10:07:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:25.729 10:07:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:25.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:25.729 10:07:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:25.729 10:07:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:25.729 [2024-11-20 10:07:02.587062] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:33:25.729 [2024-11-20 10:07:02.587162] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:25.988 [2024-11-20 10:07:02.662694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:25.988 [2024-11-20 10:07:02.725030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:25.988 [2024-11-20 10:07:02.725082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:25.988 [2024-11-20 10:07:02.725110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:25.988 [2024-11-20 10:07:02.725121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:25.988 [2024-11-20 10:07:02.725131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:25.988 [2024-11-20 10:07:02.726762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:25.988 [2024-11-20 10:07:02.726826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:25.988 [2024-11-20 10:07:02.726895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:25.988 [2024-11-20 10:07:02.726898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:0b:00.0 ]] 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:33:25.988 
10:07:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:0b:00.0 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:25.988 10:07:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:26.246 ************************************ 00:33:26.246 START TEST spdk_target_abort 00:33:26.246 ************************************ 00:33:26.246 10:07:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:33:26.246 10:07:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:26.246 10:07:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:33:26.246 10:07:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.246 10:07:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:29.525 spdk_targetn1 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:29.525 [2024-11-20 10:07:05.756147] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:29.525 [2024-11-20 10:07:05.804672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:29.525 10:07:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:32.805 Initializing NVMe Controllers 00:33:32.805 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:32.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:32.805 Initialization complete. Launching workers. 00:33:32.805 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11949, failed: 0 00:33:32.805 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1205, failed to submit 10744 00:33:32.805 success 749, unsuccessful 456, failed 0 00:33:32.805 10:07:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:32.805 10:07:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:36.083 Initializing NVMe Controllers 00:33:36.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:36.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:36.083 Initialization complete. Launching workers. 00:33:36.083 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8657, failed: 0 00:33:36.083 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1225, failed to submit 7432 00:33:36.083 success 310, unsuccessful 915, failed 0 00:33:36.083 10:07:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:36.084 10:07:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:39.417 Initializing NVMe Controllers 00:33:39.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:39.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:39.417 Initialization complete. Launching workers. 
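The rpc_cmd calls traced above are thin wrappers around scripts/rpc.py talking to the target's /var/tmp/spdk.sock. A roughly equivalent stand-alone sequence for the spdk_target_abort setup and for one abort run (queue depth 4; the 24 and 64 runs only change -q), using the PCI address, NQN and serial from this run:

    ./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
    ./build/examples/abort -q 4 -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

In the summaries printed by each run, "abort submitted" counts abort commands issued against in-flight I/O, and "success" versus "unsuccessful" broadly reflects whether the targeted command could still be aborted when the request was processed, so a mix of both is the expected outcome here.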
00:33:39.417 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30522, failed: 0 00:33:39.417 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2642, failed to submit 27880 00:33:39.417 success 487, unsuccessful 2155, failed 0 00:33:39.417 10:07:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:39.417 10:07:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.417 10:07:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:39.417 10:07:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.417 10:07:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:39.417 10:07:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.417 10:07:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:40.348 10:07:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.348 10:07:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3922880 00:33:40.348 10:07:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3922880 ']' 00:33:40.348 10:07:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3922880 00:33:40.348 10:07:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:33:40.348 10:07:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:40.348 10:07:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3922880 00:33:40.348 10:07:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:40.348 10:07:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:40.348 10:07:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3922880' 00:33:40.348 killing process with pid 3922880 00:33:40.349 10:07:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3922880 00:33:40.349 10:07:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3922880 00:33:40.608 00:33:40.608 real 0m14.413s 00:33:40.608 user 0m54.367s 00:33:40.608 sys 0m2.911s 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:40.608 ************************************ 00:33:40.608 END TEST spdk_target_abort 00:33:40.608 ************************************ 00:33:40.608 10:07:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:40.608 10:07:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:40.608 10:07:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:40.608 10:07:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:40.608 ************************************ 00:33:40.608 START TEST kernel_target_abort 00:33:40.608 
************************************ 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:40.608 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:41.984 Waiting for block devices as requested 00:33:41.984 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:41.984 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:41.984 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:41.984 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:41.984 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:42.243 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:42.243 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:42.243 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:42.500 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:33:42.500 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:42.500 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:42.758 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:42.758 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:42.758 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:42.758 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:43.017 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:43.017 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:43.275 10:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:43.275 10:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:43.275 10:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:43.275 10:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:43.275 10:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:43.275 10:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:33:43.275 10:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:43.275 10:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:43.275 10:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:43.275 No valid GPT data, bailing 00:33:43.275 10:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:43.275 10:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:33:43.275 10:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:33:43.275 10:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:43.275 10:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:43.275 10:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:43.275 10:07:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:43.275 10:07:20 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:33:43.275 00:33:43.275 Discovery Log Number of Records 2, Generation counter 2 00:33:43.275 =====Discovery Log Entry 0====== 00:33:43.275 trtype: tcp 00:33:43.275 adrfam: ipv4 00:33:43.275 subtype: current discovery subsystem 00:33:43.275 treq: not specified, sq flow control disable supported 00:33:43.275 portid: 1 00:33:43.275 trsvcid: 4420 00:33:43.275 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:43.275 traddr: 10.0.0.1 00:33:43.275 eflags: none 00:33:43.275 sectype: none 00:33:43.275 =====Discovery Log Entry 1====== 00:33:43.275 trtype: tcp 00:33:43.275 adrfam: ipv4 00:33:43.275 subtype: nvme subsystem 00:33:43.275 treq: not specified, sq flow control disable supported 00:33:43.275 portid: 1 00:33:43.275 trsvcid: 4420 00:33:43.275 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:43.275 traddr: 10.0.0.1 00:33:43.275 eflags: none 00:33:43.275 sectype: none 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:43.275 10:07:20 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:43.275 10:07:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:46.557 Initializing NVMe Controllers 00:33:46.557 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:46.557 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:46.557 Initialization complete. Launching workers. 00:33:46.557 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 48501, failed: 0 00:33:46.557 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 48501, failed to submit 0 00:33:46.557 success 0, unsuccessful 48501, failed 0 00:33:46.557 10:07:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:46.557 10:07:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:49.839 Initializing NVMe Controllers 00:33:49.839 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:49.839 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:49.839 Initialization complete. Launching workers. 
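For the kernel_target_abort half, configure_kernel_target builds the target out of the kernel nvmet configfs tree rather than through SPDK RPCs. The xtrace above only shows the echo arguments, not their redirect targets, so the attribute file names below are the standard nvmet ones and should be read as a reconstruction of this run (NQN, block device and address taken from the trace; the subsystem identity echo is omitted):

    modprobe nvmet
    mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    mkdir /sys/kernel/config/nvmet/ports/1
    echo 1            > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
    echo /dev/nvme0n1 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
    echo 1            > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    echo 10.0.0.1     > /sys/kernel/config/nvmet/ports/1/addr_traddr
    echo tcp          > /sys/kernel/config/nvmet/ports/1/addr_trtype
    echo 4420         > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
    echo ipv4         > /sys/kernel/config/nvmet/ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn \
          /sys/kernel/config/nvmet/ports/1/subsystems/
    nvme discover -t tcp -a 10.0.0.1 -s 4420    # expect the discovery subsystem plus testnqn, as logged above

The abort runs against this target reuse the same qds=(4 24 64) loop as before, only pointed at 10.0.0.1 (the kernel target) instead of 10.0.0.2.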
00:33:49.839 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95135, failed: 0 00:33:49.839 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21594, failed to submit 73541 00:33:49.839 success 0, unsuccessful 21594, failed 0 00:33:49.839 10:07:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:49.839 10:07:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:53.124 Initializing NVMe Controllers 00:33:53.124 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:53.124 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:53.124 Initialization complete. Launching workers. 00:33:53.124 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 89843, failed: 0 00:33:53.124 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22434, failed to submit 67409 00:33:53.124 success 0, unsuccessful 22434, failed 0 00:33:53.124 10:07:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:53.124 10:07:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:53.124 10:07:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:33:53.124 10:07:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:53.124 10:07:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:53.124 10:07:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:53.124 10:07:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:53.124 10:07:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:53.124 10:07:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:53.124 10:07:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:53.691 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:53.691 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:53.691 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:53.952 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:53.952 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:53.952 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:53.952 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:53.952 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:53.952 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:53.952 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:53.952 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:53.952 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:53.952 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:53.952 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:53.952 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:33:53.952 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:54.892 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:33:54.892 00:33:54.892 real 0m14.421s 00:33:54.892 user 0m6.113s 00:33:54.892 sys 0m3.456s 00:33:54.892 10:07:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:54.892 10:07:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:54.892 ************************************ 00:33:54.892 END TEST kernel_target_abort 00:33:54.892 ************************************ 00:33:55.151 10:07:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:55.151 10:07:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:55.151 10:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:55.151 10:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:33:55.151 10:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:55.151 10:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:33:55.151 10:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:55.151 10:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:55.151 rmmod nvme_tcp 00:33:55.151 rmmod nvme_fabrics 00:33:55.151 rmmod nvme_keyring 00:33:55.151 10:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:55.151 10:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:33:55.151 10:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:33:55.151 10:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3922880 ']' 00:33:55.151 10:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3922880 00:33:55.151 10:07:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3922880 ']' 00:33:55.151 10:07:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3922880 00:33:55.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3922880) - No such process 00:33:55.151 10:07:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3922880 is not found' 00:33:55.151 Process with pid 3922880 is not found 00:33:55.151 10:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:33:55.151 10:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:56.085 Waiting for block devices as requested 00:33:56.085 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:56.343 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:56.343 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:56.343 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:56.602 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:56.602 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:56.602 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:56.602 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:56.861 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:33:56.861 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:57.120 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:57.120 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:57.120 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:57.120 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:57.378 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:57.378 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:57.378 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:33:57.636 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:57.636 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:57.636 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:33:57.636 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:33:57.636 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:57.636 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:33:57.636 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:57.636 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:57.636 10:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.636 10:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:57.636 10:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.542 10:07:36 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:59.542 00:33:59.542 real 0m38.784s 00:33:59.542 user 1m2.755s 00:33:59.542 sys 0m10.115s 00:33:59.542 10:07:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:59.542 10:07:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:59.542 ************************************ 00:33:59.542 END TEST nvmf_abort_qd_sizes 00:33:59.542 ************************************ 00:33:59.542 10:07:36 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:59.542 10:07:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:59.542 10:07:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:59.542 10:07:36 -- common/autotest_common.sh@10 -- # set +x 00:33:59.542 ************************************ 00:33:59.542 START TEST keyring_file 00:33:59.542 ************************************ 00:33:59.542 10:07:36 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:59.802 * Looking for test storage... 
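Before the keyring_file suite gets going below, the preceding trace undoes all of the above: clean_kernel_target dismantles the configfs tree in the reverse order it was built, and nvmftestfini unloads the host-side modules and flushes the test namespace interface. The order matters; roughly, with the paths from this run:

    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn   # unlink the port from the subsystem first
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe -r nvmet_tcp nvmet                 # target-side modules
    modprobe -r nvme-tcp nvme-fabrics           # host-side modules; nvme_keyring is dropped with them, per the rmmod lines above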
00:33:59.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:59.802 10:07:36 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:59.802 10:07:36 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:33:59.802 10:07:36 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:59.802 10:07:36 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:59.802 10:07:36 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:59.802 10:07:36 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:59.802 10:07:36 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:59.802 10:07:36 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:33:59.802 10:07:36 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:33:59.802 10:07:36 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:33:59.802 10:07:36 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:33:59.802 10:07:36 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:33:59.802 10:07:36 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:33:59.802 10:07:36 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@345 -- # : 1 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@353 -- # local d=1 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@355 -- # echo 1 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@353 -- # local d=2 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@355 -- # echo 2 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@368 -- # return 0 00:33:59.803 10:07:36 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:59.803 10:07:36 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:59.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.803 --rc genhtml_branch_coverage=1 00:33:59.803 --rc genhtml_function_coverage=1 00:33:59.803 --rc genhtml_legend=1 00:33:59.803 --rc geninfo_all_blocks=1 00:33:59.803 --rc geninfo_unexecuted_blocks=1 00:33:59.803 00:33:59.803 ' 00:33:59.803 10:07:36 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:59.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.803 --rc genhtml_branch_coverage=1 00:33:59.803 --rc genhtml_function_coverage=1 00:33:59.803 --rc genhtml_legend=1 00:33:59.803 --rc geninfo_all_blocks=1 
00:33:59.803 --rc geninfo_unexecuted_blocks=1 00:33:59.803 00:33:59.803 ' 00:33:59.803 10:07:36 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:59.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.803 --rc genhtml_branch_coverage=1 00:33:59.803 --rc genhtml_function_coverage=1 00:33:59.803 --rc genhtml_legend=1 00:33:59.803 --rc geninfo_all_blocks=1 00:33:59.803 --rc geninfo_unexecuted_blocks=1 00:33:59.803 00:33:59.803 ' 00:33:59.803 10:07:36 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:59.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.803 --rc genhtml_branch_coverage=1 00:33:59.803 --rc genhtml_function_coverage=1 00:33:59.803 --rc genhtml_legend=1 00:33:59.803 --rc geninfo_all_blocks=1 00:33:59.803 --rc geninfo_unexecuted_blocks=1 00:33:59.803 00:33:59.803 ' 00:33:59.803 10:07:36 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:59.803 10:07:36 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:59.803 10:07:36 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:59.803 10:07:36 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.803 10:07:36 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.803 10:07:36 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.803 10:07:36 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:59.803 10:07:36 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@51 -- # : 0 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:59.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:59.803 10:07:36 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:59.803 10:07:36 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:59.803 10:07:36 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:59.803 10:07:36 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:59.803 10:07:36 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:59.803 10:07:36 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:59.803 10:07:36 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:59.803 10:07:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:33:59.803 10:07:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:59.803 10:07:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:59.803 10:07:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:59.803 10:07:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:59.803 10:07:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.iMcB9aWWjy 00:33:59.803 10:07:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:59.803 10:07:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.iMcB9aWWjy 00:33:59.803 10:07:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.iMcB9aWWjy 00:33:59.803 10:07:36 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.iMcB9aWWjy 00:33:59.803 10:07:36 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:59.803 10:07:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:59.803 10:07:36 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:59.803 10:07:36 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:59.803 10:07:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:59.803 10:07:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:59.803 10:07:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rvTdj2twLT 00:33:59.803 10:07:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:59.803 10:07:36 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:59.804 10:07:36 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:33:59.804 10:07:36 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:59.804 10:07:36 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:59.804 10:07:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rvTdj2twLT 00:33:59.804 10:07:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rvTdj2twLT 00:33:59.804 10:07:36 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.rvTdj2twLT 00:33:59.804 10:07:36 keyring_file -- keyring/file.sh@30 -- # tgtpid=3928654 00:33:59.804 10:07:36 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:59.804 10:07:36 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3928654 00:33:59.804 10:07:36 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3928654 ']' 00:33:59.804 10:07:36 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:59.804 10:07:36 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:59.804 10:07:36 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:59.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:59.804 10:07:36 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:59.804 10:07:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:00.063 [2024-11-20 10:07:36.732995] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:34:00.063 [2024-11-20 10:07:36.733068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928654 ] 00:34:00.063 [2024-11-20 10:07:36.796808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.063 [2024-11-20 10:07:36.852964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:00.322 10:07:37 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:00.322 [2024-11-20 10:07:37.105987] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:00.322 null0 00:34:00.322 [2024-11-20 10:07:37.138050] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:00.322 [2024-11-20 10:07:37.138365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.322 10:07:37 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:00.322 [2024-11-20 10:07:37.162093] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:34:00.322 request: 00:34:00.322 { 00:34:00.322 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:34:00.322 "secure_channel": false, 00:34:00.322 "listen_address": { 00:34:00.322 "trtype": "tcp", 00:34:00.322 "traddr": "127.0.0.1", 00:34:00.322 "trsvcid": "4420" 00:34:00.322 }, 00:34:00.322 "method": "nvmf_subsystem_add_listener", 00:34:00.322 "req_id": 1 00:34:00.322 } 00:34:00.322 Got JSON-RPC error response 00:34:00.322 response: 00:34:00.322 { 00:34:00.322 
"code": -32602, 00:34:00.322 "message": "Invalid parameters" 00:34:00.322 } 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:00.322 10:07:37 keyring_file -- keyring/file.sh@47 -- # bperfpid=3928665 00:34:00.322 10:07:37 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:00.322 10:07:37 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3928665 /var/tmp/bperf.sock 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3928665 ']' 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:00.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:00.322 10:07:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:00.322 [2024-11-20 10:07:37.210636] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:34:00.322 [2024-11-20 10:07:37.210696] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928665 ] 00:34:00.581 [2024-11-20 10:07:37.276308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.581 [2024-11-20 10:07:37.333964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:00.581 10:07:37 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:00.581 10:07:37 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:00.581 10:07:37 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iMcB9aWWjy 00:34:00.581 10:07:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iMcB9aWWjy 00:34:00.839 10:07:37 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rvTdj2twLT 00:34:00.839 10:07:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rvTdj2twLT 00:34:01.098 10:07:37 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:34:01.098 10:07:37 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:34:01.098 10:07:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:01.098 10:07:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:01.098 10:07:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:34:01.356 10:07:38 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.iMcB9aWWjy == \/\t\m\p\/\t\m\p\.\i\M\c\B\9\a\W\W\j\y ]] 00:34:01.356 10:07:38 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:34:01.356 10:07:38 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:34:01.356 10:07:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:01.356 10:07:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:01.356 10:07:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:01.920 10:07:38 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.rvTdj2twLT == \/\t\m\p\/\t\m\p\.\r\v\T\d\j\2\t\w\L\T ]] 00:34:01.920 10:07:38 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:34:01.920 10:07:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:01.920 10:07:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:01.920 10:07:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:01.921 10:07:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:01.921 10:07:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:01.921 10:07:38 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:34:01.921 10:07:38 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:34:02.179 10:07:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:02.179 10:07:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:02.179 10:07:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:02.179 10:07:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:02.179 10:07:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:02.440 10:07:39 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:34:02.440 10:07:39 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:02.440 10:07:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:02.700 [2024-11-20 10:07:39.355254] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:02.700 nvme0n1 00:34:02.700 10:07:39 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:34:02.700 10:07:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:02.700 10:07:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:02.700 10:07:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:02.700 10:07:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:02.700 10:07:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:02.957 10:07:39 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:34:02.957 10:07:39 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:34:02.957 10:07:39 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:34:02.957 10:07:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:02.957 10:07:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:02.957 10:07:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:02.957 10:07:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:03.215 10:07:39 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:34:03.215 10:07:39 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:03.215 Running I/O for 1 seconds... 00:34:04.598 9928.00 IOPS, 38.78 MiB/s 00:34:04.598 Latency(us) 00:34:04.598 [2024-11-20T09:07:41.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.598 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:34:04.598 nvme0n1 : 1.01 9976.35 38.97 0.00 0.00 12788.25 5849.69 21165.70 00:34:04.598 [2024-11-20T09:07:41.512Z] =================================================================================================================== 00:34:04.598 [2024-11-20T09:07:41.512Z] Total : 9976.35 38.97 0.00 0.00 12788.25 5849.69 21165.70 00:34:04.598 { 00:34:04.598 "results": [ 00:34:04.598 { 00:34:04.598 "job": "nvme0n1", 00:34:04.598 "core_mask": "0x2", 00:34:04.598 "workload": "randrw", 00:34:04.598 "percentage": 50, 00:34:04.598 "status": "finished", 00:34:04.598 "queue_depth": 128, 00:34:04.598 "io_size": 4096, 00:34:04.598 "runtime": 1.008184, 00:34:04.598 "iops": 9976.353522769654, 00:34:04.598 "mibps": 38.97013094831896, 00:34:04.598 "io_failed": 0, 00:34:04.598 "io_timeout": 0, 00:34:04.598 "avg_latency_us": 12788.253596400138, 00:34:04.598 "min_latency_us": 5849.694814814815, 00:34:04.598 "max_latency_us": 21165.70074074074 00:34:04.598 } 00:34:04.598 ], 00:34:04.598 "core_count": 1 00:34:04.598 } 00:34:04.598 10:07:41 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:04.598 10:07:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:04.598 10:07:41 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:34:04.598 10:07:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:04.598 10:07:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:04.598 10:07:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:04.598 10:07:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:04.598 10:07:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:04.856 10:07:41 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:34:04.856 10:07:41 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:34:04.856 10:07:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:04.856 10:07:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:04.856 10:07:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:04.856 10:07:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:04.856 10:07:41 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:05.115 10:07:41 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:34:05.115 10:07:41 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:05.115 10:07:41 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:05.115 10:07:41 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:05.115 10:07:41 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:05.115 10:07:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:05.115 10:07:41 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:05.115 10:07:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:05.115 10:07:41 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:05.115 10:07:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:05.373 [2024-11-20 10:07:42.216839] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:05.373 [2024-11-20 10:07:42.217147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c4510 (107): Transport endpoint is not connected 00:34:05.373 [2024-11-20 10:07:42.218139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c4510 (9): Bad file descriptor 00:34:05.373 [2024-11-20 10:07:42.219140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:05.373 [2024-11-20 10:07:42.219161] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:05.373 [2024-11-20 10:07:42.219189] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:05.373 [2024-11-20 10:07:42.219204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:34:05.373 request: 00:34:05.373 { 00:34:05.373 "name": "nvme0", 00:34:05.373 "trtype": "tcp", 00:34:05.373 "traddr": "127.0.0.1", 00:34:05.373 "adrfam": "ipv4", 00:34:05.373 "trsvcid": "4420", 00:34:05.373 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:05.373 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:05.373 "prchk_reftag": false, 00:34:05.373 "prchk_guard": false, 00:34:05.373 "hdgst": false, 00:34:05.373 "ddgst": false, 00:34:05.373 "psk": "key1", 00:34:05.373 "allow_unrecognized_csi": false, 00:34:05.373 "method": "bdev_nvme_attach_controller", 00:34:05.373 "req_id": 1 00:34:05.373 } 00:34:05.373 Got JSON-RPC error response 00:34:05.373 response: 00:34:05.373 { 00:34:05.373 "code": -5, 00:34:05.373 "message": "Input/output error" 00:34:05.373 } 00:34:05.373 10:07:42 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:05.373 10:07:42 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:05.373 10:07:42 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:05.373 10:07:42 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:05.373 10:07:42 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:34:05.373 10:07:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:05.373 10:07:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:05.373 10:07:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:05.373 10:07:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:05.373 10:07:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:05.630 10:07:42 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:34:05.630 10:07:42 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:34:05.630 10:07:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:05.630 10:07:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:05.630 10:07:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:05.630 10:07:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:05.630 10:07:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:05.888 10:07:42 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:34:05.888 10:07:42 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:34:05.888 10:07:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:06.146 10:07:43 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:34:06.146 10:07:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:34:06.712 10:07:43 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:34:06.712 10:07:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:06.712 10:07:43 keyring_file -- keyring/file.sh@78 -- # jq length 00:34:06.712 10:07:43 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:34:06.712 10:07:43 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.iMcB9aWWjy 00:34:06.712 10:07:43 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.iMcB9aWWjy 00:34:06.712 10:07:43 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:06.712 10:07:43 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.iMcB9aWWjy 00:34:06.712 10:07:43 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:06.712 10:07:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:06.712 10:07:43 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:06.712 10:07:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:06.712 10:07:43 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iMcB9aWWjy 00:34:06.712 10:07:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iMcB9aWWjy 00:34:06.971 [2024-11-20 10:07:43.846198] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.iMcB9aWWjy': 0100660 00:34:06.971 [2024-11-20 10:07:43.846230] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:34:06.971 request: 00:34:06.971 { 00:34:06.971 "name": "key0", 00:34:06.971 "path": "/tmp/tmp.iMcB9aWWjy", 00:34:06.971 "method": "keyring_file_add_key", 00:34:06.971 "req_id": 1 00:34:06.971 } 00:34:06.971 Got JSON-RPC error response 00:34:06.971 response: 00:34:06.971 { 00:34:06.971 "code": -1, 00:34:06.971 "message": "Operation not permitted" 00:34:06.971 } 00:34:06.971 10:07:43 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:06.971 10:07:43 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:06.971 10:07:43 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:06.971 10:07:43 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:06.971 10:07:43 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.iMcB9aWWjy 00:34:06.971 10:07:43 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iMcB9aWWjy 00:34:06.971 10:07:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iMcB9aWWjy 00:34:07.228 10:07:44 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.iMcB9aWWjy 00:34:07.486 10:07:44 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:34:07.486 10:07:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:07.486 10:07:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:07.486 10:07:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:07.486 10:07:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:07.486 10:07:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:07.743 10:07:44 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:34:07.743 10:07:44 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:07.743 10:07:44 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:07.743 10:07:44 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:07.743 10:07:44 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:07.743 10:07:44 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:07.743 10:07:44 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:07.743 10:07:44 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:07.744 10:07:44 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:07.744 10:07:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:08.001 [2024-11-20 10:07:44.676437] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.iMcB9aWWjy': No such file or directory 00:34:08.001 [2024-11-20 10:07:44.676466] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:34:08.001 [2024-11-20 10:07:44.676497] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:34:08.001 [2024-11-20 10:07:44.676511] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:34:08.001 [2024-11-20 10:07:44.676524] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:08.001 [2024-11-20 10:07:44.676536] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:34:08.001 request: 00:34:08.001 { 00:34:08.001 "name": "nvme0", 00:34:08.001 "trtype": "tcp", 00:34:08.001 "traddr": "127.0.0.1", 00:34:08.001 "adrfam": "ipv4", 00:34:08.001 "trsvcid": "4420", 00:34:08.001 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:08.001 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:08.001 "prchk_reftag": false, 00:34:08.001 "prchk_guard": false, 00:34:08.001 "hdgst": false, 00:34:08.001 "ddgst": false, 00:34:08.001 "psk": "key0", 00:34:08.001 "allow_unrecognized_csi": false, 00:34:08.001 "method": "bdev_nvme_attach_controller", 00:34:08.001 "req_id": 1 00:34:08.001 } 00:34:08.001 Got JSON-RPC error response 00:34:08.001 response: 00:34:08.001 { 00:34:08.001 "code": -19, 00:34:08.001 "message": "No such device" 00:34:08.001 } 00:34:08.001 10:07:44 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:08.001 10:07:44 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:08.001 10:07:44 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:08.001 10:07:44 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:08.001 10:07:44 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:34:08.001 10:07:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:08.259 10:07:44 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:08.259 10:07:44 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:34:08.259 10:07:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:08.259 10:07:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:08.259 10:07:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:08.259 10:07:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:08.259 10:07:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8DOwLKerIN 00:34:08.259 10:07:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:08.259 10:07:44 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:08.259 10:07:44 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:08.259 10:07:44 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:08.259 10:07:44 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:08.259 10:07:44 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:08.259 10:07:44 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:08.259 10:07:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8DOwLKerIN 00:34:08.259 10:07:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8DOwLKerIN 00:34:08.259 10:07:45 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.8DOwLKerIN 00:34:08.259 10:07:45 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8DOwLKerIN 00:34:08.259 10:07:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8DOwLKerIN 00:34:08.518 10:07:45 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:08.518 10:07:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:08.776 nvme0n1 00:34:08.776 10:07:45 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:34:08.776 10:07:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:08.776 10:07:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:08.776 10:07:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:08.776 10:07:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:08.776 10:07:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:09.034 10:07:45 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:34:09.034 10:07:45 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:34:09.034 10:07:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:09.292 10:07:46 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:34:09.292 10:07:46 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:34:09.292 10:07:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:09.292 10:07:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:09.292 10:07:46 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:09.553 10:07:46 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:34:09.553 10:07:46 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:34:09.553 10:07:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:09.553 10:07:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:09.553 10:07:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:09.553 10:07:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:09.553 10:07:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:09.811 10:07:46 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:34:09.811 10:07:46 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:09.811 10:07:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:10.069 10:07:46 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:34:10.069 10:07:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:10.069 10:07:46 keyring_file -- keyring/file.sh@105 -- # jq length 00:34:10.635 10:07:47 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:34:10.635 10:07:47 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8DOwLKerIN 00:34:10.635 10:07:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8DOwLKerIN 00:34:10.635 10:07:47 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rvTdj2twLT 00:34:10.635 10:07:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rvTdj2twLT 00:34:10.893 10:07:47 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:10.893 10:07:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:11.458 nvme0n1 00:34:11.458 10:07:48 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:34:11.458 10:07:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:34:11.716 10:07:48 keyring_file -- keyring/file.sh@113 -- # config='{ 00:34:11.716 "subsystems": [ 00:34:11.716 { 00:34:11.716 "subsystem": "keyring", 00:34:11.717 "config": [ 00:34:11.717 { 00:34:11.717 "method": "keyring_file_add_key", 00:34:11.717 "params": { 00:34:11.717 "name": "key0", 00:34:11.717 "path": "/tmp/tmp.8DOwLKerIN" 00:34:11.717 } 00:34:11.717 }, 00:34:11.717 { 00:34:11.717 "method": "keyring_file_add_key", 00:34:11.717 "params": { 00:34:11.717 "name": "key1", 00:34:11.717 "path": "/tmp/tmp.rvTdj2twLT" 00:34:11.717 } 00:34:11.717 } 00:34:11.717 ] 00:34:11.717 
}, 00:34:11.717 { 00:34:11.717 "subsystem": "iobuf", 00:34:11.717 "config": [ 00:34:11.717 { 00:34:11.717 "method": "iobuf_set_options", 00:34:11.717 "params": { 00:34:11.717 "small_pool_count": 8192, 00:34:11.717 "large_pool_count": 1024, 00:34:11.717 "small_bufsize": 8192, 00:34:11.717 "large_bufsize": 135168, 00:34:11.717 "enable_numa": false 00:34:11.717 } 00:34:11.717 } 00:34:11.717 ] 00:34:11.717 }, 00:34:11.717 { 00:34:11.717 "subsystem": "sock", 00:34:11.717 "config": [ 00:34:11.717 { 00:34:11.717 "method": "sock_set_default_impl", 00:34:11.717 "params": { 00:34:11.717 "impl_name": "posix" 00:34:11.717 } 00:34:11.717 }, 00:34:11.717 { 00:34:11.717 "method": "sock_impl_set_options", 00:34:11.717 "params": { 00:34:11.717 "impl_name": "ssl", 00:34:11.717 "recv_buf_size": 4096, 00:34:11.717 "send_buf_size": 4096, 00:34:11.717 "enable_recv_pipe": true, 00:34:11.717 "enable_quickack": false, 00:34:11.717 "enable_placement_id": 0, 00:34:11.717 "enable_zerocopy_send_server": true, 00:34:11.717 "enable_zerocopy_send_client": false, 00:34:11.717 "zerocopy_threshold": 0, 00:34:11.717 "tls_version": 0, 00:34:11.717 "enable_ktls": false 00:34:11.717 } 00:34:11.717 }, 00:34:11.717 { 00:34:11.717 "method": "sock_impl_set_options", 00:34:11.717 "params": { 00:34:11.717 "impl_name": "posix", 00:34:11.717 "recv_buf_size": 2097152, 00:34:11.717 "send_buf_size": 2097152, 00:34:11.717 "enable_recv_pipe": true, 00:34:11.717 "enable_quickack": false, 00:34:11.717 "enable_placement_id": 0, 00:34:11.717 "enable_zerocopy_send_server": true, 00:34:11.717 "enable_zerocopy_send_client": false, 00:34:11.717 "zerocopy_threshold": 0, 00:34:11.717 "tls_version": 0, 00:34:11.717 "enable_ktls": false 00:34:11.717 } 00:34:11.717 } 00:34:11.717 ] 00:34:11.717 }, 00:34:11.717 { 00:34:11.717 "subsystem": "vmd", 00:34:11.717 "config": [] 00:34:11.717 }, 00:34:11.717 { 00:34:11.717 "subsystem": "accel", 00:34:11.717 "config": [ 00:34:11.717 { 00:34:11.717 "method": "accel_set_options", 00:34:11.717 "params": { 00:34:11.717 "small_cache_size": 128, 00:34:11.717 "large_cache_size": 16, 00:34:11.717 "task_count": 2048, 00:34:11.717 "sequence_count": 2048, 00:34:11.717 "buf_count": 2048 00:34:11.717 } 00:34:11.717 } 00:34:11.717 ] 00:34:11.717 }, 00:34:11.717 { 00:34:11.717 "subsystem": "bdev", 00:34:11.717 "config": [ 00:34:11.717 { 00:34:11.717 "method": "bdev_set_options", 00:34:11.717 "params": { 00:34:11.717 "bdev_io_pool_size": 65535, 00:34:11.717 "bdev_io_cache_size": 256, 00:34:11.717 "bdev_auto_examine": true, 00:34:11.717 "iobuf_small_cache_size": 128, 00:34:11.717 "iobuf_large_cache_size": 16 00:34:11.717 } 00:34:11.717 }, 00:34:11.717 { 00:34:11.717 "method": "bdev_raid_set_options", 00:34:11.717 "params": { 00:34:11.717 "process_window_size_kb": 1024, 00:34:11.717 "process_max_bandwidth_mb_sec": 0 00:34:11.717 } 00:34:11.717 }, 00:34:11.717 { 00:34:11.717 "method": "bdev_iscsi_set_options", 00:34:11.717 "params": { 00:34:11.717 "timeout_sec": 30 00:34:11.717 } 00:34:11.717 }, 00:34:11.717 { 00:34:11.717 "method": "bdev_nvme_set_options", 00:34:11.717 "params": { 00:34:11.717 "action_on_timeout": "none", 00:34:11.717 "timeout_us": 0, 00:34:11.717 "timeout_admin_us": 0, 00:34:11.717 "keep_alive_timeout_ms": 10000, 00:34:11.717 "arbitration_burst": 0, 00:34:11.717 "low_priority_weight": 0, 00:34:11.717 "medium_priority_weight": 0, 00:34:11.717 "high_priority_weight": 0, 00:34:11.717 "nvme_adminq_poll_period_us": 10000, 00:34:11.717 "nvme_ioq_poll_period_us": 0, 00:34:11.717 "io_queue_requests": 512, 00:34:11.717 
"delay_cmd_submit": true, 00:34:11.717 "transport_retry_count": 4, 00:34:11.717 "bdev_retry_count": 3, 00:34:11.717 "transport_ack_timeout": 0, 00:34:11.717 "ctrlr_loss_timeout_sec": 0, 00:34:11.717 "reconnect_delay_sec": 0, 00:34:11.717 "fast_io_fail_timeout_sec": 0, 00:34:11.717 "disable_auto_failback": false, 00:34:11.717 "generate_uuids": false, 00:34:11.717 "transport_tos": 0, 00:34:11.717 "nvme_error_stat": false, 00:34:11.717 "rdma_srq_size": 0, 00:34:11.717 "io_path_stat": false, 00:34:11.717 "allow_accel_sequence": false, 00:34:11.717 "rdma_max_cq_size": 0, 00:34:11.717 "rdma_cm_event_timeout_ms": 0, 00:34:11.717 "dhchap_digests": [ 00:34:11.717 "sha256", 00:34:11.717 "sha384", 00:34:11.717 "sha512" 00:34:11.717 ], 00:34:11.717 "dhchap_dhgroups": [ 00:34:11.717 "null", 00:34:11.717 "ffdhe2048", 00:34:11.717 "ffdhe3072", 00:34:11.717 "ffdhe4096", 00:34:11.717 "ffdhe6144", 00:34:11.717 "ffdhe8192" 00:34:11.717 ] 00:34:11.717 } 00:34:11.717 }, 00:34:11.717 { 00:34:11.717 "method": "bdev_nvme_attach_controller", 00:34:11.717 "params": { 00:34:11.717 "name": "nvme0", 00:34:11.718 "trtype": "TCP", 00:34:11.718 "adrfam": "IPv4", 00:34:11.718 "traddr": "127.0.0.1", 00:34:11.718 "trsvcid": "4420", 00:34:11.718 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:11.718 "prchk_reftag": false, 00:34:11.718 "prchk_guard": false, 00:34:11.718 "ctrlr_loss_timeout_sec": 0, 00:34:11.718 "reconnect_delay_sec": 0, 00:34:11.718 "fast_io_fail_timeout_sec": 0, 00:34:11.718 "psk": "key0", 00:34:11.718 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:11.718 "hdgst": false, 00:34:11.718 "ddgst": false, 00:34:11.718 "multipath": "multipath" 00:34:11.718 } 00:34:11.718 }, 00:34:11.718 { 00:34:11.718 "method": "bdev_nvme_set_hotplug", 00:34:11.718 "params": { 00:34:11.718 "period_us": 100000, 00:34:11.718 "enable": false 00:34:11.718 } 00:34:11.718 }, 00:34:11.718 { 00:34:11.718 "method": "bdev_wait_for_examine" 00:34:11.718 } 00:34:11.718 ] 00:34:11.718 }, 00:34:11.718 { 00:34:11.718 "subsystem": "nbd", 00:34:11.718 "config": [] 00:34:11.718 } 00:34:11.718 ] 00:34:11.718 }' 00:34:11.718 10:07:48 keyring_file -- keyring/file.sh@115 -- # killprocess 3928665 00:34:11.718 10:07:48 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3928665 ']' 00:34:11.718 10:07:48 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3928665 00:34:11.718 10:07:48 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:11.718 10:07:48 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:11.718 10:07:48 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3928665 00:34:11.718 10:07:48 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:11.718 10:07:48 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:11.718 10:07:48 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3928665' 00:34:11.718 killing process with pid 3928665 00:34:11.718 10:07:48 keyring_file -- common/autotest_common.sh@973 -- # kill 3928665 00:34:11.718 Received shutdown signal, test time was about 1.000000 seconds 00:34:11.718 00:34:11.718 Latency(us) 00:34:11.718 [2024-11-20T09:07:48.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.718 [2024-11-20T09:07:48.632Z] =================================================================================================================== 00:34:11.718 [2024-11-20T09:07:48.632Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:11.718 10:07:48 
keyring_file -- common/autotest_common.sh@978 -- # wait 3928665 00:34:11.976 10:07:48 keyring_file -- keyring/file.sh@118 -- # bperfpid=3930130 00:34:11.976 10:07:48 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3930130 /var/tmp/bperf.sock 00:34:11.976 10:07:48 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3930130 ']' 00:34:11.976 10:07:48 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:11.976 10:07:48 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:34:11.976 10:07:48 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.976 10:07:48 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:11.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:11.976 10:07:48 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:34:11.976 "subsystems": [ 00:34:11.976 { 00:34:11.977 "subsystem": "keyring", 00:34:11.977 "config": [ 00:34:11.977 { 00:34:11.977 "method": "keyring_file_add_key", 00:34:11.977 "params": { 00:34:11.977 "name": "key0", 00:34:11.977 "path": "/tmp/tmp.8DOwLKerIN" 00:34:11.977 } 00:34:11.977 }, 00:34:11.977 { 00:34:11.977 "method": "keyring_file_add_key", 00:34:11.977 "params": { 00:34:11.977 "name": "key1", 00:34:11.977 "path": "/tmp/tmp.rvTdj2twLT" 00:34:11.977 } 00:34:11.977 } 00:34:11.977 ] 00:34:11.977 }, 00:34:11.977 { 00:34:11.977 "subsystem": "iobuf", 00:34:11.977 "config": [ 00:34:11.977 { 00:34:11.977 "method": "iobuf_set_options", 00:34:11.977 "params": { 00:34:11.977 "small_pool_count": 8192, 00:34:11.977 "large_pool_count": 1024, 00:34:11.977 "small_bufsize": 8192, 00:34:11.977 "large_bufsize": 135168, 00:34:11.977 "enable_numa": false 00:34:11.977 } 00:34:11.977 } 00:34:11.977 ] 00:34:11.977 }, 00:34:11.977 { 00:34:11.977 "subsystem": "sock", 00:34:11.977 "config": [ 00:34:11.977 { 00:34:11.977 "method": "sock_set_default_impl", 00:34:11.977 "params": { 00:34:11.977 "impl_name": "posix" 00:34:11.977 } 00:34:11.977 }, 00:34:11.977 { 00:34:11.977 "method": "sock_impl_set_options", 00:34:11.977 "params": { 00:34:11.977 "impl_name": "ssl", 00:34:11.977 "recv_buf_size": 4096, 00:34:11.977 "send_buf_size": 4096, 00:34:11.977 "enable_recv_pipe": true, 00:34:11.977 "enable_quickack": false, 00:34:11.977 "enable_placement_id": 0, 00:34:11.977 "enable_zerocopy_send_server": true, 00:34:11.977 "enable_zerocopy_send_client": false, 00:34:11.977 "zerocopy_threshold": 0, 00:34:11.977 "tls_version": 0, 00:34:11.977 "enable_ktls": false 00:34:11.977 } 00:34:11.977 }, 00:34:11.977 { 00:34:11.977 "method": "sock_impl_set_options", 00:34:11.977 "params": { 00:34:11.977 "impl_name": "posix", 00:34:11.977 "recv_buf_size": 2097152, 00:34:11.977 "send_buf_size": 2097152, 00:34:11.977 "enable_recv_pipe": true, 00:34:11.977 "enable_quickack": false, 00:34:11.977 "enable_placement_id": 0, 00:34:11.977 "enable_zerocopy_send_server": true, 00:34:11.977 "enable_zerocopy_send_client": false, 00:34:11.977 "zerocopy_threshold": 0, 00:34:11.977 "tls_version": 0, 00:34:11.977 "enable_ktls": false 00:34:11.977 } 00:34:11.977 } 00:34:11.977 ] 00:34:11.977 }, 00:34:11.977 { 00:34:11.977 "subsystem": "vmd", 00:34:11.977 "config": [] 00:34:11.977 }, 00:34:11.977 { 00:34:11.977 "subsystem": "accel", 00:34:11.977 "config": [ 00:34:11.977 
{ 00:34:11.977 "method": "accel_set_options", 00:34:11.977 "params": { 00:34:11.977 "small_cache_size": 128, 00:34:11.977 "large_cache_size": 16, 00:34:11.977 "task_count": 2048, 00:34:11.977 "sequence_count": 2048, 00:34:11.977 "buf_count": 2048 00:34:11.977 } 00:34:11.977 } 00:34:11.977 ] 00:34:11.977 }, 00:34:11.977 { 00:34:11.977 "subsystem": "bdev", 00:34:11.977 "config": [ 00:34:11.977 { 00:34:11.977 "method": "bdev_set_options", 00:34:11.977 "params": { 00:34:11.977 "bdev_io_pool_size": 65535, 00:34:11.977 "bdev_io_cache_size": 256, 00:34:11.977 "bdev_auto_examine": true, 00:34:11.977 "iobuf_small_cache_size": 128, 00:34:11.977 "iobuf_large_cache_size": 16 00:34:11.977 } 00:34:11.977 }, 00:34:11.977 { 00:34:11.977 "method": "bdev_raid_set_options", 00:34:11.977 "params": { 00:34:11.977 "process_window_size_kb": 1024, 00:34:11.977 "process_max_bandwidth_mb_sec": 0 00:34:11.977 } 00:34:11.977 }, 00:34:11.977 { 00:34:11.977 "method": "bdev_iscsi_set_options", 00:34:11.977 "params": { 00:34:11.977 "timeout_sec": 30 00:34:11.977 } 00:34:11.977 }, 00:34:11.977 { 00:34:11.977 "method": "bdev_nvme_set_options", 00:34:11.977 "params": { 00:34:11.977 "action_on_timeout": "none", 00:34:11.977 "timeout_us": 0, 00:34:11.977 "timeout_admin_us": 0, 00:34:11.977 "keep_alive_timeout_ms": 10000, 00:34:11.977 "arbitration_burst": 0, 00:34:11.977 "low_priority_weight": 0, 00:34:11.977 "medium_priority_weight": 0, 00:34:11.977 "high_priority_weight": 0, 00:34:11.977 "nvme_adminq_poll_period_us": 10000, 00:34:11.977 "nvme_ioq_poll_period_us": 0, 00:34:11.977 "io_queue_requests": 512, 00:34:11.977 "delay_cmd_submit": true, 00:34:11.977 "transport_retry_count": 4, 00:34:11.977 "bdev_retry_count": 3, 00:34:11.977 "transport_ack_timeout": 0, 00:34:11.977 "ctrlr_loss_timeout_sec": 0, 00:34:11.977 "reconnect_delay_sec": 0, 00:34:11.977 "fast_io_fail_timeout_sec": 0, 00:34:11.977 "disable_auto_failback": false, 00:34:11.977 "generate_uuids": false, 00:34:11.977 "transport_tos": 0, 00:34:11.977 "nvme_error_stat": false, 00:34:11.977 "rdma_srq_size": 0, 00:34:11.977 "io_path_stat": false, 00:34:11.977 "allow_accel_sequence": false, 00:34:11.977 "rdma_max_cq_size": 0, 00:34:11.977 "rdma_cm_event_timeout_ms": 0, 00:34:11.977 "dhchap_digests": [ 00:34:11.977 "sha256", 00:34:11.977 "sha384", 00:34:11.977 "sha512" 00:34:11.977 ], 00:34:11.977 "dhchap_dhgroups": [ 00:34:11.977 "null", 00:34:11.977 "ffdhe2048", 00:34:11.977 "ffdhe3072", 00:34:11.977 "ffdhe4096", 00:34:11.977 "ffdhe6144", 00:34:11.977 "ffdhe8192" 00:34:11.977 ] 00:34:11.977 } 00:34:11.977 }, 00:34:11.977 { 00:34:11.977 "method": "bdev_nvme_attach_controller", 00:34:11.977 "params": { 00:34:11.977 "name": "nvme0", 00:34:11.977 "trtype": "TCP", 00:34:11.977 "adrfam": "IPv4", 00:34:11.977 "traddr": "127.0.0.1", 00:34:11.977 "trsvcid": "4420", 00:34:11.977 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:11.977 "prchk_reftag": false, 00:34:11.977 "prchk_guard": false, 00:34:11.977 "ctrlr_loss_timeout_sec": 0, 00:34:11.977 "reconnect_delay_sec": 0, 00:34:11.977 "fast_io_fail_timeout_sec": 0, 00:34:11.977 "psk": "key0", 00:34:11.977 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:11.977 "hdgst": false, 00:34:11.977 "ddgst": false, 00:34:11.977 "multipath": "multipath" 00:34:11.977 } 00:34:11.977 }, 00:34:11.977 { 00:34:11.977 "method": "bdev_nvme_set_hotplug", 00:34:11.977 "params": { 00:34:11.977 "period_us": 100000, 00:34:11.977 "enable": false 00:34:11.977 } 00:34:11.977 }, 00:34:11.977 { 00:34:11.977 "method": "bdev_wait_for_examine" 00:34:11.977 } 00:34:11.977 
] 00:34:11.977 }, 00:34:11.977 { 00:34:11.977 "subsystem": "nbd", 00:34:11.977 "config": [] 00:34:11.977 } 00:34:11.977 ] 00:34:11.977 }' 00:34:11.977 10:07:48 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.977 10:07:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:11.977 [2024-11-20 10:07:48.765737] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 00:34:11.977 [2024-11-20 10:07:48.765839] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930130 ] 00:34:11.977 [2024-11-20 10:07:48.840451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:12.244 [2024-11-20 10:07:48.902933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:12.244 [2024-11-20 10:07:49.082625] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:12.561 10:07:49 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:12.561 10:07:49 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:12.561 10:07:49 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:34:12.561 10:07:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:12.561 10:07:49 keyring_file -- keyring/file.sh@121 -- # jq length 00:34:12.561 10:07:49 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:34:12.561 10:07:49 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:34:12.834 10:07:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:12.834 10:07:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:12.834 10:07:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:12.834 10:07:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:12.834 10:07:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:12.834 10:07:49 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:34:12.834 10:07:49 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:34:12.835 10:07:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:12.835 10:07:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:12.835 10:07:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:12.835 10:07:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:12.835 10:07:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:13.400 10:07:50 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:34:13.400 10:07:50 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:34:13.400 10:07:50 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:34:13.400 10:07:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:34:13.400 10:07:50 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:34:13.400 10:07:50 keyring_file -- keyring/file.sh@1 -- # cleanup 00:34:13.400 10:07:50 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.8DOwLKerIN /tmp/tmp.rvTdj2twLT 00:34:13.400 10:07:50 keyring_file -- keyring/file.sh@20 -- # killprocess 3930130 00:34:13.400 10:07:50 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3930130 ']' 00:34:13.400 10:07:50 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3930130 00:34:13.400 10:07:50 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:13.400 10:07:50 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:13.400 10:07:50 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3930130 00:34:13.659 10:07:50 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:13.659 10:07:50 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:13.659 10:07:50 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3930130' 00:34:13.659 killing process with pid 3930130 00:34:13.659 10:07:50 keyring_file -- common/autotest_common.sh@973 -- # kill 3930130 00:34:13.659 Received shutdown signal, test time was about 1.000000 seconds 00:34:13.659 00:34:13.659 Latency(us) 00:34:13.659 [2024-11-20T09:07:50.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:13.659 [2024-11-20T09:07:50.573Z] =================================================================================================================== 00:34:13.659 [2024-11-20T09:07:50.573Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:13.659 10:07:50 keyring_file -- common/autotest_common.sh@978 -- # wait 3930130 00:34:13.659 10:07:50 keyring_file -- keyring/file.sh@21 -- # killprocess 3928654 00:34:13.659 10:07:50 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3928654 ']' 00:34:13.659 10:07:50 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3928654 00:34:13.659 10:07:50 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:13.659 10:07:50 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:13.659 10:07:50 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3928654 00:34:13.917 10:07:50 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:13.917 10:07:50 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:13.917 10:07:50 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3928654' 00:34:13.917 killing process with pid 3928654 00:34:13.917 10:07:50 keyring_file -- common/autotest_common.sh@973 -- # kill 3928654 00:34:13.917 10:07:50 keyring_file -- common/autotest_common.sh@978 -- # wait 3928654 00:34:14.175 00:34:14.175 real 0m14.582s 00:34:14.175 user 0m37.203s 00:34:14.175 sys 0m3.156s 00:34:14.175 10:07:51 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:14.175 10:07:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:14.175 ************************************ 00:34:14.175 END TEST keyring_file 00:34:14.175 ************************************ 00:34:14.175 10:07:51 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:34:14.175 10:07:51 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:14.175 10:07:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:14.175 10:07:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:14.175 10:07:51 -- 
common/autotest_common.sh@10 -- # set +x 00:34:14.175 ************************************ 00:34:14.175 START TEST keyring_linux 00:34:14.175 ************************************ 00:34:14.175 10:07:51 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:14.175 Joined session keyring: 6148802 00:34:14.435 * Looking for test storage... 00:34:14.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:14.435 10:07:51 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:14.435 10:07:51 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:34:14.435 10:07:51 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:14.435 10:07:51 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@345 -- # : 1 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@368 -- # return 0 00:34:14.435 10:07:51 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:14.435 10:07:51 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:14.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.435 --rc genhtml_branch_coverage=1 00:34:14.435 --rc genhtml_function_coverage=1 00:34:14.435 --rc genhtml_legend=1 00:34:14.435 --rc geninfo_all_blocks=1 00:34:14.435 --rc geninfo_unexecuted_blocks=1 00:34:14.435 00:34:14.435 ' 00:34:14.435 10:07:51 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:14.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.435 --rc genhtml_branch_coverage=1 00:34:14.435 --rc genhtml_function_coverage=1 00:34:14.435 --rc genhtml_legend=1 00:34:14.435 --rc geninfo_all_blocks=1 00:34:14.435 --rc geninfo_unexecuted_blocks=1 00:34:14.435 00:34:14.435 ' 00:34:14.435 10:07:51 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:14.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.435 --rc genhtml_branch_coverage=1 00:34:14.435 --rc genhtml_function_coverage=1 00:34:14.435 --rc genhtml_legend=1 00:34:14.435 --rc geninfo_all_blocks=1 00:34:14.435 --rc geninfo_unexecuted_blocks=1 00:34:14.435 00:34:14.435 ' 00:34:14.435 10:07:51 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:14.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.435 --rc genhtml_branch_coverage=1 00:34:14.435 --rc genhtml_function_coverage=1 00:34:14.435 --rc genhtml_legend=1 00:34:14.435 --rc geninfo_all_blocks=1 00:34:14.435 --rc geninfo_unexecuted_blocks=1 00:34:14.435 00:34:14.435 ' 00:34:14.435 10:07:51 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:14.435 10:07:51 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.435 10:07:51 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.435 10:07:51 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.435 10:07:51 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.435 10:07:51 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.435 10:07:51 keyring_linux -- paths/export.sh@5 -- # export PATH 00:34:14.435 10:07:51 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.435 10:07:51 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:14.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:14.436 10:07:51 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:14.436 10:07:51 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:14.436 10:07:51 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:14.436 10:07:51 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:14.436 10:07:51 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:14.436 10:07:51 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:14.436 10:07:51 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:34:14.436 10:07:51 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:34:14.436 10:07:51 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:34:14.436 10:07:51 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:34:14.436 10:07:51 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:14.436 10:07:51 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:34:14.436 10:07:51 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:14.436 10:07:51 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:14.436 10:07:51 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:34:14.436 10:07:51 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:14.436 10:07:51 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:14.436 10:07:51 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:14.436 10:07:51 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:14.436 10:07:51 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:14.436 10:07:51 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:14.436 10:07:51 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:14.436 10:07:51 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:34:14.436 10:07:51 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:34:14.436 /tmp/:spdk-test:key0 00:34:14.436 10:07:51 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:34:14.436 10:07:51 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:14.436 10:07:51 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:34:14.436 10:07:51 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:14.436 10:07:51 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:14.436 10:07:51 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:34:14.436 
10:07:51 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:14.436 10:07:51 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:14.436 10:07:51 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:14.436 10:07:51 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:14.436 10:07:51 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:14.436 10:07:51 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:14.436 10:07:51 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:14.436 10:07:51 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:34:14.436 10:07:51 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:34:14.436 /tmp/:spdk-test:key1 00:34:14.436 10:07:51 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3930622 00:34:14.436 10:07:51 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:14.436 10:07:51 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3930622 00:34:14.436 10:07:51 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3930622 ']' 00:34:14.436 10:07:51 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.436 10:07:51 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:14.436 10:07:51 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:14.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:14.436 10:07:51 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:14.436 10:07:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:14.695 [2024-11-20 10:07:51.356541] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
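prep_key, traced above for key0 and key1, turns each raw hex key into an NVMe/TCP configured-PSK interchange string, writes it to a mode 0600 file under /tmp, and echoes the path. The resulting strings have the shape NVMeTLSkey-1:00:<base64>:, that is a version prefix, a two-digit PSK-digest indicator (00 means no digest here), and a base64 payload. The block below is a hedged re-creation of that encoding, assuming the payload is the key bytes followed by their little-endian CRC-32, which is consistent with the string lengths seen in this log; it is an illustration, not a copy of SPDK's format_key.

# Illustrative sketch of the interchange encoding (assumed, not SPDK's own code).
key=00112233445566778899aabbccddeeff   # same 32-byte key material as key0 above
digest=0                               # 0 -> "00" (no PSK digest)
python3 - "$key" "$digest" <<'PYEOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                       # key material as raw bytes
hid = "{:02d}".format(int(sys.argv[2]))          # 00=none, 01=SHA-256, 02=SHA-384
blob = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print("NVMeTLSkey-1:{}:{}:".format(hid, blob))   # same shape as /tmp/:spdk-test:key0
PYEOF

Carrying the CRC-32 inside the base64 payload lets a consumer detect a mistyped or truncated PSK before it ever attempts a TLS handshake with it.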
00:34:14.695 [2024-11-20 10:07:51.356655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930622 ] 00:34:14.695 [2024-11-20 10:07:51.422379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.695 [2024-11-20 10:07:51.479535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:14.953 10:07:51 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:14.953 10:07:51 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:34:14.953 10:07:51 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:34:14.953 10:07:51 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.953 10:07:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:14.953 [2024-11-20 10:07:51.728249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:14.953 null0 00:34:14.953 [2024-11-20 10:07:51.760324] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:14.953 [2024-11-20 10:07:51.760806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:14.953 10:07:51 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.953 10:07:51 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:34:14.953 104370159 00:34:14.953 10:07:51 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:34:14.953 626692537 00:34:14.953 10:07:51 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3930627 00:34:14.953 10:07:51 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:34:14.953 10:07:51 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3930627 /var/tmp/bperf.sock 00:34:14.953 10:07:51 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3930627 ']' 00:34:14.953 10:07:51 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:14.953 10:07:51 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:14.953 10:07:51 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:14.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:14.953 10:07:51 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:14.953 10:07:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:14.953 [2024-11-20 10:07:51.826395] Starting SPDK v25.01-pre git sha1 f549a9953 / DPDK 24.03.0 initialization... 
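With the target listening on 127.0.0.1:4420, the two interchange PSKs are loaded into the session keyring as user-type keys named :spdk-test:key0 and :spdk-test:key1; keyctl prints their serial numbers (104370159 and 626692537), which the test later compares against the sn values reported by SPDK's keyring_get_keys RPC. The commands below are a hedged recap of that keyctl round trip using standard keyutils subcommands; the key descriptions and file paths are the ones from this trace.

# Hedged keyutils sketch of the round trip exercised in this test.
# "@s" is the caller's session keyring; "user" is the generic user-key type.
psk0="$(cat /tmp/:spdk-test:key0)"                  # NVMeTLSkey-1:00:...:
sn0="$(keyctl add user :spdk-test:key0 "$psk0" @s)" # prints the new key's serial
keyctl search @s user :spdk-test:key0               # resolves the name back to the serial
keyctl print "$sn0"                                 # dumps the stored PSK payload
keyctl unlink "$sn0" @s                             # what cleanup() does at the end

Keeping the PSK in the kernel keyring rather than only in a file means bdevperf can reference it by name (--psk :spdk-test:key0) and the payload never has to appear on the bdevperf command line.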
00:34:14.953 [2024-11-20 10:07:51.826461] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930627 ] 00:34:15.212 [2024-11-20 10:07:51.890945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:15.212 [2024-11-20 10:07:51.947786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:15.212 10:07:52 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:15.212 10:07:52 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:34:15.212 10:07:52 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:34:15.212 10:07:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:34:15.470 10:07:52 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:34:15.470 10:07:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:16.036 10:07:52 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:16.036 10:07:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:16.036 [2024-11-20 10:07:52.927321] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:16.294 nvme0n1 00:34:16.294 10:07:53 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:34:16.294 10:07:53 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:34:16.294 10:07:53 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:16.294 10:07:53 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:16.294 10:07:53 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:16.294 10:07:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:16.553 10:07:53 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:34:16.553 10:07:53 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:16.553 10:07:53 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:34:16.553 10:07:53 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:34:16.553 10:07:53 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:16.553 10:07:53 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:34:16.553 10:07:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:16.811 10:07:53 keyring_linux -- keyring/linux.sh@25 -- # sn=104370159 00:34:16.811 10:07:53 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:34:16.811 10:07:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:16.811 10:07:53 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 104370159 == \1\0\4\3\7\0\1\5\9 ]] 00:34:16.811 10:07:53 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 104370159 00:34:16.811 10:07:53 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:34:16.811 10:07:53 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:16.811 Running I/O for 1 seconds... 00:34:18.185 11055.00 IOPS, 43.18 MiB/s 00:34:18.185 Latency(us) 00:34:18.185 [2024-11-20T09:07:55.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:18.185 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:18.185 nvme0n1 : 1.01 11068.86 43.24 0.00 0.00 11497.53 7912.87 19806.44 00:34:18.185 [2024-11-20T09:07:55.099Z] =================================================================================================================== 00:34:18.185 [2024-11-20T09:07:55.099Z] Total : 11068.86 43.24 0.00 0.00 11497.53 7912.87 19806.44 00:34:18.185 { 00:34:18.185 "results": [ 00:34:18.185 { 00:34:18.185 "job": "nvme0n1", 00:34:18.185 "core_mask": "0x2", 00:34:18.185 "workload": "randread", 00:34:18.185 "status": "finished", 00:34:18.185 "queue_depth": 128, 00:34:18.185 "io_size": 4096, 00:34:18.185 "runtime": 1.010402, 00:34:18.185 "iops": 11068.861700590458, 00:34:18.185 "mibps": 43.23774101793148, 00:34:18.185 "io_failed": 0, 00:34:18.185 "io_timeout": 0, 00:34:18.185 "avg_latency_us": 11497.5347215599, 00:34:18.185 "min_latency_us": 7912.8651851851855, 00:34:18.185 "max_latency_us": 19806.435555555556 00:34:18.185 } 00:34:18.185 ], 00:34:18.185 "core_count": 1 00:34:18.185 } 00:34:18.185 10:07:54 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:18.185 10:07:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:18.185 10:07:54 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:34:18.185 10:07:54 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:34:18.185 10:07:54 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:18.185 10:07:54 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:18.185 10:07:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:18.185 10:07:54 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:18.443 10:07:55 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:34:18.443 10:07:55 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:18.443 10:07:55 keyring_linux -- keyring/linux.sh@23 -- # return 00:34:18.443 10:07:55 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:18.443 10:07:55 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:34:18.443 10:07:55 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:34:18.443 10:07:55 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:18.443 10:07:55 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:18.443 10:07:55 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:18.443 10:07:55 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:18.443 10:07:55 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:18.443 10:07:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:18.701 [2024-11-20 10:07:55.524551] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:18.701 [2024-11-20 10:07:55.524620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba0bc0 (107): Transport endpoint is not connected 00:34:18.701 [2024-11-20 10:07:55.525589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba0bc0 (9): Bad file descriptor 00:34:18.701 [2024-11-20 10:07:55.526588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:18.701 [2024-11-20 10:07:55.526632] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:18.701 [2024-11-20 10:07:55.526653] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:18.701 [2024-11-20 10:07:55.526690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:34:18.701 request: 00:34:18.701 { 00:34:18.701 "name": "nvme0", 00:34:18.701 "trtype": "tcp", 00:34:18.701 "traddr": "127.0.0.1", 00:34:18.701 "adrfam": "ipv4", 00:34:18.701 "trsvcid": "4420", 00:34:18.701 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:18.701 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:18.701 "prchk_reftag": false, 00:34:18.701 "prchk_guard": false, 00:34:18.701 "hdgst": false, 00:34:18.701 "ddgst": false, 00:34:18.701 "psk": ":spdk-test:key1", 00:34:18.701 "allow_unrecognized_csi": false, 00:34:18.701 "method": "bdev_nvme_attach_controller", 00:34:18.702 "req_id": 1 00:34:18.702 } 00:34:18.702 Got JSON-RPC error response 00:34:18.702 response: 00:34:18.702 { 00:34:18.702 "code": -5, 00:34:18.702 "message": "Input/output error" 00:34:18.702 } 00:34:18.702 10:07:55 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:34:18.702 10:07:55 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:18.702 10:07:55 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:18.702 10:07:55 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:18.702 10:07:55 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:34:18.702 10:07:55 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:18.702 10:07:55 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:34:18.702 10:07:55 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:34:18.702 10:07:55 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:34:18.702 10:07:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:18.702 10:07:55 keyring_linux -- keyring/linux.sh@33 -- # sn=104370159 00:34:18.702 10:07:55 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 104370159 00:34:18.702 1 links removed 00:34:18.702 10:07:55 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:18.702 10:07:55 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:34:18.702 10:07:55 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:34:18.702 10:07:55 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:34:18.702 10:07:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:34:18.702 10:07:55 keyring_linux -- keyring/linux.sh@33 -- # sn=626692537 00:34:18.702 10:07:55 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 626692537 00:34:18.702 1 links removed 00:34:18.702 10:07:55 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3930627 00:34:18.702 10:07:55 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3930627 ']' 00:34:18.702 10:07:55 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3930627 00:34:18.702 10:07:55 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:34:18.702 10:07:55 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:18.702 10:07:55 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3930627 00:34:18.702 10:07:55 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:18.702 10:07:55 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:18.702 10:07:55 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3930627' 00:34:18.702 killing process with pid 3930627 00:34:18.702 10:07:55 keyring_linux -- common/autotest_common.sh@973 -- # kill 3930627 00:34:18.702 Received shutdown signal, test time was about 1.000000 seconds 00:34:18.702 00:34:18.702 
Latency(us) 00:34:18.702 [2024-11-20T09:07:55.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:18.702 [2024-11-20T09:07:55.616Z] =================================================================================================================== 00:34:18.702 [2024-11-20T09:07:55.616Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:18.702 10:07:55 keyring_linux -- common/autotest_common.sh@978 -- # wait 3930627 00:34:18.959 10:07:55 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3930622 00:34:18.959 10:07:55 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3930622 ']' 00:34:18.959 10:07:55 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3930622 00:34:18.959 10:07:55 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:34:18.959 10:07:55 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:18.959 10:07:55 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3930622 00:34:18.959 10:07:55 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:18.959 10:07:55 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:18.959 10:07:55 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3930622' 00:34:18.960 killing process with pid 3930622 00:34:18.960 10:07:55 keyring_linux -- common/autotest_common.sh@973 -- # kill 3930622 00:34:18.960 10:07:55 keyring_linux -- common/autotest_common.sh@978 -- # wait 3930622 00:34:19.527 00:34:19.527 real 0m5.128s 00:34:19.527 user 0m10.331s 00:34:19.527 sys 0m1.520s 00:34:19.527 10:07:56 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:19.527 10:07:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:19.527 ************************************ 00:34:19.527 END TEST keyring_linux 00:34:19.527 ************************************ 00:34:19.527 10:07:56 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:19.527 10:07:56 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:34:19.527 10:07:56 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:34:19.527 10:07:56 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:34:19.527 10:07:56 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:34:19.527 10:07:56 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:34:19.527 10:07:56 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:19.527 10:07:56 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:19.527 10:07:56 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:19.527 10:07:56 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:19.527 10:07:56 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:19.527 10:07:56 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:19.527 10:07:56 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:19.527 10:07:56 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:19.527 10:07:56 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:34:19.527 10:07:56 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:34:19.527 10:07:56 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:34:19.527 10:07:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:19.527 10:07:56 -- common/autotest_common.sh@10 -- # set +x 00:34:19.527 10:07:56 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:34:19.527 10:07:56 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:34:19.527 10:07:56 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:34:19.527 10:07:56 -- common/autotest_common.sh@10 -- # set +x 00:34:21.433 INFO: APP EXITING 
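That closes the keyring_linux test. For readability, the RPC sequence it drove through the bdevperf socket (/var/tmp/bperf.sock) is condensed below; command names and arguments are taken from the trace, the repo-relative rpc.py path is shortened, and the comment on why the second attach fails is an inference (the target side, configured earlier in the test and outside this excerpt, presumably only accepts the key0 PSK for this host).

# Condensed, hedged recap of the bdevperf-side RPCs traced above.
rpc="scripts/rpc.py -s /var/tmp/bperf.sock"   # run from the SPDK repo root

$rpc keyring_linux_set_options --enable       # let SPDK resolve PSKs via the kernel keyring
$rpc framework_start_init                     # finish startup (bdevperf ran with --wait-for-rpc)

# Attach over TLS with the expected key: succeeds and exposes nvme0n1 for the randread job.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
$rpc keyring_get_keys                         # reported sn matches keyctl's 104370159
$rpc bdev_nvme_detach_controller nvme0        # key count drops back to 0

# Negative path: attaching with the other key is expected to fail, so the test
# inverts the exit status (the NOT helper) and treats the I/O error as a pass.
! $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1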
00:34:21.433 INFO: killing all VMs 00:34:21.433 INFO: killing vhost app 00:34:21.433 INFO: EXIT DONE 00:34:22.811 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:34:22.811 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:34:22.811 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:34:22.811 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:34:22.811 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:34:22.811 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:34:22.811 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:34:22.811 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:34:22.811 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:34:22.811 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:34:22.811 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:34:22.811 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:34:22.811 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:34:22.811 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:34:22.811 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:34:22.811 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:34:22.811 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:34:24.187 Cleaning 00:34:24.187 Removing: /var/run/dpdk/spdk0/config 00:34:24.187 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:24.187 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:24.187 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:24.187 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:24.187 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:24.187 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:24.187 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:24.187 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:24.187 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:24.187 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:24.187 Removing: /var/run/dpdk/spdk1/config 00:34:24.187 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:24.187 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:24.187 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:24.187 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:24.187 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:24.187 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:24.187 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:24.187 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:24.187 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:24.187 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:24.187 Removing: /var/run/dpdk/spdk2/config 00:34:24.187 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:24.187 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:24.187 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:24.188 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:24.188 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:24.188 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:24.188 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:24.188 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:24.188 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:24.188 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:24.188 Removing: /var/run/dpdk/spdk3/config 00:34:24.188 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:24.188 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:24.188 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:24.188 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:24.188 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:24.188 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:24.188 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:24.188 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:24.188 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:24.188 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:24.188 Removing: /var/run/dpdk/spdk4/config 00:34:24.188 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:24.188 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:24.188 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:24.188 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:24.188 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:24.188 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:24.188 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:24.188 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:24.188 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:24.188 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:24.188 Removing: /dev/shm/bdev_svc_trace.1 00:34:24.188 Removing: /dev/shm/nvmf_trace.0 00:34:24.188 Removing: /dev/shm/spdk_tgt_trace.pid3608459 00:34:24.188 Removing: /var/run/dpdk/spdk0 00:34:24.188 Removing: /var/run/dpdk/spdk1 00:34:24.188 Removing: /var/run/dpdk/spdk2 00:34:24.188 Removing: /var/run/dpdk/spdk3 00:34:24.188 Removing: /var/run/dpdk/spdk4 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3606825 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3607568 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3608459 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3608841 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3609534 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3609674 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3610469 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3610515 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3610880 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3612596 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3613519 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3613837 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3614030 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3614244 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3614447 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3614602 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3614756 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3615061 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3615258 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3617769 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3617933 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3618093 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3618222 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3618527 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3618568 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3618964 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3618975 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3619260 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3619271 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3619437 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3619563 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3619954 00:34:24.188 Removing: /var/run/dpdk/spdk_pid3620108 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3620421 00:34:24.447 Removing: 
/var/run/dpdk/spdk_pid3622540 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3625178 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3632313 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3632725 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3635249 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3635518 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3638053 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3641897 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3644590 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3651079 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3656376 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3657572 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3658319 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3668624 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3671037 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3699433 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3702666 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3706499 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3710890 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3710894 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3711438 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3712095 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3712750 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3713158 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3713278 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3713424 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3713557 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3713559 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3714216 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3714871 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3715464 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3715929 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3716046 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3716195 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3717597 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3718428 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3723655 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3751808 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3754735 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3755916 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3757233 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3757375 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3757517 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3757662 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3758102 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3759423 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3760280 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3760711 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3762301 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3762626 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3763189 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3765586 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3769620 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3769621 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3769622 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3771836 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3776571 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3779354 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3783122 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3784064 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3785160 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3786141 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3788903 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3791489 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3793852 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3798086 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3798088 00:34:24.447 Removing: 
/var/run/dpdk/spdk_pid3800881 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3801015 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3801265 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3801535 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3801548 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3804308 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3804635 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3807307 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3809912 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3813338 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3816675 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3823171 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3827643 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3827647 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3840038 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3840566 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3840978 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3841495 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3842197 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3843026 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3843524 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3843935 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3846439 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3846600 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3850468 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3850553 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3853919 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3856535 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3863462 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3863873 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3866367 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3866645 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3869152 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3872845 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3875061 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3881998 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3887199 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3888388 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3889049 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3899125 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3901376 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3903369 00:34:24.447 Removing: /var/run/dpdk/spdk_pid3908418 00:34:24.705 Removing: /var/run/dpdk/spdk_pid3908423 00:34:24.705 Removing: /var/run/dpdk/spdk_pid3911330 00:34:24.705 Removing: /var/run/dpdk/spdk_pid3912841 00:34:24.705 Removing: /var/run/dpdk/spdk_pid3914750 00:34:24.705 Removing: /var/run/dpdk/spdk_pid3915618 00:34:24.705 Removing: /var/run/dpdk/spdk_pid3917025 00:34:24.705 Removing: /var/run/dpdk/spdk_pid3917898 00:34:24.705 Removing: /var/run/dpdk/spdk_pid3923241 00:34:24.705 Removing: /var/run/dpdk/spdk_pid3923577 00:34:24.705 Removing: /var/run/dpdk/spdk_pid3923969 00:34:24.705 Removing: /var/run/dpdk/spdk_pid3925531 00:34:24.705 Removing: /var/run/dpdk/spdk_pid3925926 00:34:24.705 Removing: /var/run/dpdk/spdk_pid3926225 00:34:24.705 Removing: /var/run/dpdk/spdk_pid3928654 00:34:24.705 Removing: /var/run/dpdk/spdk_pid3928665 00:34:24.705 Removing: /var/run/dpdk/spdk_pid3930130 00:34:24.705 Removing: /var/run/dpdk/spdk_pid3930622 00:34:24.705 Removing: /var/run/dpdk/spdk_pid3930627 00:34:24.705 Clean 00:34:24.705 10:08:01 -- common/autotest_common.sh@1453 -- # return 0 00:34:24.705 10:08:01 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:34:24.705 10:08:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:24.705 10:08:01 -- common/autotest_common.sh@10 -- # set +x 00:34:24.705 10:08:01 -- 
spdk/autotest.sh@391 -- # timing_exit autotest 00:34:24.705 10:08:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:24.705 10:08:01 -- common/autotest_common.sh@10 -- # set +x 00:34:24.705 10:08:01 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:24.705 10:08:01 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:34:24.706 10:08:01 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:34:24.706 10:08:01 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:34:24.706 10:08:01 -- spdk/autotest.sh@398 -- # hostname 00:34:24.706 10:08:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:34:24.962 geninfo: WARNING: invalid characters removed from testname! 00:34:57.021 10:08:32 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:00.304 10:08:36 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:03.585 10:08:39 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:06.113 10:08:42 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:09.396 10:08:45 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:12.692 10:08:48 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:15.253 10:08:51 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:15.253 10:08:52 -- spdk/autorun.sh@1 -- $ timing_finish 00:35:15.253 10:08:52 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:35:15.253 10:08:52 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:15.253 10:08:52 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:35:15.253 10:08:52 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:15.253 + [[ -n 3536817 ]] 00:35:15.253 + sudo kill 3536817 00:35:15.264 [Pipeline] } 00:35:15.281 [Pipeline] // stage 00:35:15.288 [Pipeline] } 00:35:15.301 [Pipeline] // timeout 00:35:15.305 [Pipeline] } 00:35:15.317 [Pipeline] // catchError 00:35:15.322 [Pipeline] } 00:35:15.337 [Pipeline] // wrap 00:35:15.341 [Pipeline] } 00:35:15.353 [Pipeline] // catchError 00:35:15.360 [Pipeline] stage 00:35:15.362 [Pipeline] { (Epilogue) 00:35:15.373 [Pipeline] catchError 00:35:15.375 [Pipeline] { 00:35:15.386 [Pipeline] echo 00:35:15.387 Cleanup processes 00:35:15.392 [Pipeline] sh 00:35:15.678 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:15.678 3941318 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:15.691 [Pipeline] sh 00:35:16.025 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:16.025 ++ grep -v 'sudo pgrep' 00:35:16.025 ++ awk '{print $1}' 00:35:16.025 + sudo kill -9 00:35:16.025 + true 00:35:16.059 [Pipeline] sh 00:35:16.343 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:26.323 [Pipeline] sh 00:35:26.637 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:26.637 Artifacts sizes are good 00:35:26.651 [Pipeline] archiveArtifacts 00:35:26.658 Archiving artifacts 00:35:26.777 [Pipeline] sh 00:35:27.060 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:35:27.075 [Pipeline] cleanWs 00:35:27.086 [WS-CLEANUP] Deleting project workspace... 00:35:27.086 [WS-CLEANUP] Deferred wipeout is used... 00:35:27.092 [WS-CLEANUP] done 00:35:27.094 [Pipeline] } 00:35:27.114 [Pipeline] // catchError 00:35:27.126 [Pipeline] sh 00:35:27.404 + logger -p user.info -t JENKINS-CI 00:35:27.412 [Pipeline] } 00:35:27.427 [Pipeline] // stage 00:35:27.432 [Pipeline] } 00:35:27.448 [Pipeline] // node 00:35:27.454 [Pipeline] End of Pipeline 00:35:27.497 Finished: SUCCESS
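End of the captured pipeline. As a closing note, the coverage post-processing that runs between the test teardown and the artifact archiving above reduces to the lcov workflow sketched here: capture the counters accumulated during the run, merge them with the pre-test baseline, then strip everything that is not SPDK's own code. Paths are shortened to the autotest output directory; the flags and filter patterns are the ones visible in the trace.

# Hedged recap of the coverage steps above ($out stands for the autotest output dir).
out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
rc="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

# capture counters accumulated while the tests ran
lcov $rc -q -c --no-external -d spdk -t spdk-gp-06 -o "$out/cov_test.info"
# merge with the baseline captured before the tests started
lcov $rc -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
# drop third-party and uninteresting code from the combined report
lcov $rc -q -r "$out/cov_total.info" '*/dpdk/*'           -o "$out/cov_total.info"
lcov $rc -q -r "$out/cov_total.info" '/usr/*'             -o "$out/cov_total.info" --ignore-errors unused,unused
lcov $rc -q -r "$out/cov_total.info" '*/examples/vmd/*'   -o "$out/cov_total.info"
lcov $rc -q -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
lcov $rc -q -r "$out/cov_total.info" '*/app/spdk_top/*'   -o "$out/cov_total.info"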